00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 294 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.087 The recommended git tool is: git 00:00:00.087 using credential 00000000-0000-0000-0000-000000000002 00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.137 Fetching changes from the remote Git repository 00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.201 Using shallow fetch with depth 1 00:00:00.201 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.201 > git --version # timeout=10 00:00:00.251 > git --version # 'git version 2.39.2' 00:00:00.251 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.285 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.285 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.766 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.780 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.792 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:08.793 > git config core.sparsecheckout # timeout=10 00:00:08.804 > git read-tree -mu HEAD # timeout=10 00:00:08.819 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:08.838 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:08.838 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:08.928 [Pipeline] Start of Pipeline 00:00:08.940 [Pipeline] library 00:00:08.942 Loading library shm_lib@master 00:00:08.942 Library shm_lib@master is cached. Copying from home. 00:00:08.955 [Pipeline] node 00:00:08.963 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.965 [Pipeline] { 00:00:08.973 [Pipeline] catchError 00:00:08.974 [Pipeline] { 00:00:08.983 [Pipeline] wrap 00:00:08.990 [Pipeline] { 00:00:08.994 [Pipeline] stage 00:00:08.996 [Pipeline] { (Prologue) 00:00:09.186 [Pipeline] sh 00:00:09.521 + logger -p user.info -t JENKINS-CI 00:00:09.540 [Pipeline] echo 00:00:09.541 Node: CYP9 00:00:09.548 [Pipeline] sh 00:00:09.855 [Pipeline] setCustomBuildProperty 00:00:09.867 [Pipeline] echo 00:00:09.868 Cleanup processes 00:00:09.873 [Pipeline] sh 00:00:10.162 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.163 692849 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.178 [Pipeline] sh 00:00:10.468 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.468 ++ grep -v 'sudo pgrep' 00:00:10.468 ++ awk '{print $1}' 00:00:10.468 + sudo kill -9 00:00:10.468 + true 00:00:10.484 [Pipeline] cleanWs 00:00:10.495 [WS-CLEANUP] Deleting project workspace... 00:00:10.495 [WS-CLEANUP] Deferred wipeout is used... 
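For reference, the process-cleanup step traced above collapses to roughly the one-liner below (the workspace path is the job's own, taken from the log; the trailing "|| true" mirrors the "+ true" in the trace, which keeps the stage green when no stale SPDK processes are found):

    # Hedged sketch of the cleanup pipeline: list matching processes,
    # drop the pgrep invocation itself, keep the PIDs, then kill them.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    pids=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true   # empty PID list (as in this run) must not fail the stage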
00:00:10.503 [WS-CLEANUP] done 00:00:10.508 [Pipeline] setCustomBuildProperty 00:00:10.522 [Pipeline] sh 00:00:10.811 + sudo git config --global --replace-all safe.directory '*' 00:00:10.911 [Pipeline] httpRequest 00:00:11.332 [Pipeline] echo 00:00:11.334 Sorcerer 10.211.164.101 is alive 00:00:11.344 [Pipeline] retry 00:00:11.346 [Pipeline] { 00:00:11.357 [Pipeline] httpRequest 00:00:11.362 HttpMethod: GET 00:00:11.362 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.363 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.377 Response Code: HTTP/1.1 200 OK 00:00:11.378 Success: Status code 200 is in the accepted range: 200,404 00:00:11.378 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:14.077 [Pipeline] } 00:00:14.094 [Pipeline] // retry 00:00:14.102 [Pipeline] sh 00:00:14.393 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:14.411 [Pipeline] httpRequest 00:00:14.816 [Pipeline] echo 00:00:14.817 Sorcerer 10.211.164.101 is alive 00:00:14.826 [Pipeline] retry 00:00:14.828 [Pipeline] { 00:00:14.842 [Pipeline] httpRequest 00:00:14.847 HttpMethod: GET 00:00:14.847 URL: http://10.211.164.101/packages/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:00:14.848 Sending request to url: http://10.211.164.101/packages/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:00:14.868 Response Code: HTTP/1.1 200 OK 00:00:14.868 Success: Status code 200 is in the accepted range: 200,404 00:00:14.869 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:01:24.567 [Pipeline] } 00:01:24.584 [Pipeline] // retry 00:01:24.591 [Pipeline] sh 00:01:24.881 + tar --no-same-owner -xf spdk_5031f0f3b908d6f77b11d1b459e5f8c49753fe3c.tar.gz 00:01:28.197 [Pipeline] sh 00:01:28.487 + git -C spdk log --oneline -n5 00:01:28.487 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:01:28.487 dc3ea9d27 bdevperf: Allocate an md buffer for verify op 00:01:28.487 0ce363beb spdk_log: introduce spdk_log_ext API 00:01:28.487 412fced1b bdev/compress: unmap support. 
00:01:28.487 3791dfc65 nvme: rename spdk_nvme_ctrlr_aer_completion_list 00:01:28.502 [Pipeline] sh 00:01:28.791 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/86/24686/3 00:01:30.177 From https://review.spdk.io/gerrit/spdk/dpdk 00:01:30.177 * branch refs/changes/86/24686/3 -> FETCH_HEAD 00:01:30.191 [Pipeline] sh 00:01:30.481 + git -C spdk/dpdk checkout FETCH_HEAD 00:01:31.423 Previous HEAD position was 8d8db71763 eal/alarm_cancel: Fix thread starvation 00:01:31.423 HEAD is now at ad6cb6153f bus/pci: don't open uio device in secondary process 00:01:31.432 [Pipeline] } 00:01:31.443 [Pipeline] // stage 00:01:31.449 [Pipeline] stage 00:01:31.451 [Pipeline] { (Prepare) 00:01:31.465 [Pipeline] writeFile 00:01:31.478 [Pipeline] sh 00:01:31.765 + logger -p user.info -t JENKINS-CI 00:01:31.778 [Pipeline] sh 00:01:32.065 + logger -p user.info -t JENKINS-CI 00:01:32.079 [Pipeline] sh 00:01:32.368 + cat autorun-spdk.conf 00:01:32.368 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.368 SPDK_TEST_NVMF=1 00:01:32.368 SPDK_TEST_NVME_CLI=1 00:01:32.368 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.368 SPDK_TEST_NVMF_NICS=e810 00:01:32.368 SPDK_TEST_VFIOUSER=1 00:01:32.368 SPDK_RUN_UBSAN=1 00:01:32.368 NET_TYPE=phy 00:01:32.377 RUN_NIGHTLY= 00:01:32.381 [Pipeline] readFile 00:01:32.404 [Pipeline] withEnv 00:01:32.406 [Pipeline] { 00:01:32.417 [Pipeline] sh 00:01:32.706 + set -ex 00:01:32.706 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:32.706 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:32.706 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.706 ++ SPDK_TEST_NVMF=1 00:01:32.706 ++ SPDK_TEST_NVME_CLI=1 00:01:32.706 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.706 ++ SPDK_TEST_NVMF_NICS=e810 00:01:32.706 ++ SPDK_TEST_VFIOUSER=1 00:01:32.706 ++ SPDK_RUN_UBSAN=1 00:01:32.706 ++ NET_TYPE=phy 00:01:32.706 ++ RUN_NIGHTLY= 00:01:32.706 + case $SPDK_TEST_NVMF_NICS in 00:01:32.706 + DRIVERS=ice 00:01:32.706 + [[ tcp == \r\d\m\a ]] 00:01:32.706 + [[ -n ice ]] 00:01:32.706 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:32.706 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:32.706 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:32.706 rmmod: ERROR: Module irdma is not currently loaded 00:01:32.706 rmmod: ERROR: Module i40iw is not currently loaded 00:01:32.706 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:32.706 + true 00:01:32.706 + for D in $DRIVERS 00:01:32.706 + sudo modprobe ice 00:01:32.706 + exit 0 00:01:32.716 [Pipeline] } 00:01:32.730 [Pipeline] // withEnv 00:01:32.735 [Pipeline] } 00:01:32.750 [Pipeline] // stage 00:01:32.759 [Pipeline] catchError 00:01:32.761 [Pipeline] { 00:01:32.775 [Pipeline] timeout 00:01:32.775 Timeout set to expire in 1 hr 0 min 00:01:32.777 [Pipeline] { 00:01:32.800 [Pipeline] stage 00:01:32.802 [Pipeline] { (Tests) 00:01:32.817 [Pipeline] sh 00:01:33.107 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:33.107 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:33.107 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:33.107 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:33.107 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:33.107 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:33.107 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:33.107 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:33.107 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:33.107 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:33.107 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:33.107 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:33.107 + source /etc/os-release 00:01:33.107 ++ NAME='Fedora Linux' 00:01:33.107 ++ VERSION='39 (Cloud Edition)' 00:01:33.107 ++ ID=fedora 00:01:33.107 ++ VERSION_ID=39 00:01:33.107 ++ VERSION_CODENAME= 00:01:33.107 ++ PLATFORM_ID=platform:f39 00:01:33.107 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:33.107 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:33.107 ++ LOGO=fedora-logo-icon 00:01:33.107 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:33.107 ++ HOME_URL=https://fedoraproject.org/ 00:01:33.107 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:33.107 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:33.107 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:33.107 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:33.107 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:33.107 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:33.107 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:33.107 ++ SUPPORT_END=2024-11-12 00:01:33.107 ++ VARIANT='Cloud Edition' 00:01:33.107 ++ VARIANT_ID=cloud 00:01:33.107 + uname -a 00:01:33.107 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:33.107 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:36.409 Hugepages 00:01:36.409 node hugesize free / total 00:01:36.409 node0 1048576kB 0 / 0 00:01:36.409 node0 2048kB 0 / 0 00:01:36.409 node1 1048576kB 0 / 0 00:01:36.409 node1 2048kB 0 / 0 00:01:36.409 00:01:36.409 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:36.409 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:36.409 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:36.409 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:36.409 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:36.409 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:36.409 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:36.409 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:36.409 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:36.409 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:36.409 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:36.409 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:36.409 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:36.409 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:36.409 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:36.409 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:36.410 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:36.410 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:36.410 + rm -f /tmp/spdk-ld-path 00:01:36.410 + source autorun-spdk.conf 00:01:36.410 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.410 ++ SPDK_TEST_NVMF=1 00:01:36.410 ++ SPDK_TEST_NVME_CLI=1 00:01:36.410 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.410 ++ SPDK_TEST_NVMF_NICS=e810 00:01:36.410 ++ SPDK_TEST_VFIOUSER=1 00:01:36.410 ++ SPDK_RUN_UBSAN=1 00:01:36.410 ++ NET_TYPE=phy 00:01:36.410 ++ RUN_NIGHTLY= 00:01:36.410 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:36.410 + [[ -n '' ]] 00:01:36.410 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:36.410 + for M in /var/spdk/build-*-manifest.txt 00:01:36.410 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:36.410 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:36.410 + for M in /var/spdk/build-*-manifest.txt 00:01:36.410 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:36.410 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:36.410 + for M in /var/spdk/build-*-manifest.txt 00:01:36.410 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:36.410 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:36.410 ++ uname 00:01:36.410 + [[ Linux == \L\i\n\u\x ]] 00:01:36.410 + sudo dmesg -T 00:01:36.410 + sudo dmesg --clear 00:01:36.410 + dmesg_pid=693873 00:01:36.410 + [[ Fedora Linux == FreeBSD ]] 00:01:36.410 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:36.410 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:36.410 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:36.410 + [[ -x /usr/src/fio-static/fio ]] 00:01:36.410 + export FIO_BIN=/usr/src/fio-static/fio 00:01:36.410 + FIO_BIN=/usr/src/fio-static/fio 00:01:36.410 + sudo dmesg -Tw 00:01:36.410 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:36.410 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:36.410 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:36.410 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:36.410 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:36.410 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:36.410 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:36.410 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:36.410 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:36.410 Test configuration: 00:01:36.410 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.410 SPDK_TEST_NVMF=1 00:01:36.410 SPDK_TEST_NVME_CLI=1 00:01:36.410 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.410 SPDK_TEST_NVMF_NICS=e810 00:01:36.410 SPDK_TEST_VFIOUSER=1 00:01:36.410 SPDK_RUN_UBSAN=1 00:01:36.410 NET_TYPE=phy 00:01:36.410 RUN_NIGHTLY= 11:35:21 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:36.410 11:35:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:36.410 11:35:21 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:36.410 11:35:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:36.410 11:35:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:36.410 11:35:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:36.410 11:35:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.410 11:35:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:01:36.410 11:35:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.410 11:35:21 -- paths/export.sh@5 -- $ export PATH 00:01:36.410 11:35:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:36.410 11:35:21 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:36.410 11:35:21 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:36.410 11:35:21 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728639321.XXXXXX 00:01:36.410 11:35:21 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728639321.G8YA0M 00:01:36.410 11:35:21 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:36.410 11:35:21 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:36.410 11:35:21 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:36.410 11:35:21 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:36.410 11:35:21 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:36.410 11:35:21 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:36.410 11:35:21 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:36.410 11:35:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.671 11:35:21 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:36.671 11:35:21 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:36.671 11:35:21 -- pm/common@17 -- $ local monitor 00:01:36.671 11:35:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.671 11:35:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.671 11:35:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.671 11:35:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:36.671 11:35:21 -- pm/common@21 -- $ date +%s 00:01:36.671 11:35:21 -- pm/common@25 -- $ sleep 1 00:01:36.671 11:35:21 -- pm/common@21 -- $ date +%s 00:01:36.671 11:35:21 -- pm/common@21 -- $ date +%s 00:01:36.671 11:35:21 -- pm/common@21 -- $ date +%s 00:01:36.671 11:35:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autobuild.sh.1728639321 00:01:36.671 11:35:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728639321 00:01:36.671 11:35:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728639321 00:01:36.671 11:35:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728639321 00:01:36.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728639321_collect-cpu-load.pm.log 00:01:36.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728639321_collect-vmstat.pm.log 00:01:36.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728639321_collect-cpu-temp.pm.log 00:01:36.671 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728639321_collect-bmc-pm.bmc.pm.log 00:01:37.616 11:35:22 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:37.616 11:35:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:37.616 11:35:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:37.616 11:35:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:37.616 11:35:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:37.616 Fri Oct 11 09:35:22 AM UTC 2024 00:01:37.616 11:35:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:37.616 v25.01-pre-54-g5031f0f3b 00:01:37.616 11:35:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:37.616 11:35:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:37.616 11:35:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:37.616 11:35:22 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:37.616 11:35:22 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:37.616 11:35:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.616 ************************************ 00:01:37.616 START TEST ubsan 00:01:37.616 ************************************ 00:01:37.616 11:35:22 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:37.616 using ubsan 00:01:37.616 00:01:37.616 real 0m0.001s 00:01:37.616 user 0m0.000s 00:01:37.616 sys 0m0.000s 00:01:37.616 11:35:22 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:37.616 11:35:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:37.616 ************************************ 00:01:37.616 END TEST ubsan 00:01:37.616 ************************************ 00:01:37.616 11:35:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:37.616 11:35:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:37.616 11:35:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:37.616 11:35:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:37.616 11:35:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:37.616 11:35:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:37.616 11:35:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:37.616 11:35:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:37.616 11:35:22 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:37.877 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:37.877 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:38.138 Using 'verbs' RDMA provider 00:01:53.997 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:06.229 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:06.800 Creating mk/config.mk...done. 00:02:06.800 Creating mk/cc.flags.mk...done. 00:02:06.800 Type 'make' to build. 00:02:06.800 11:35:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:06.800 11:35:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:06.801 11:35:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:06.801 11:35:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.801 ************************************ 00:02:06.801 START TEST make 00:02:06.801 ************************************ 00:02:06.801 11:35:51 make -- common/autotest_common.sh@1125 -- $ make -j144 00:02:07.373 make[1]: Nothing to be done for 'all'. 00:02:08.762 The Meson build system 00:02:08.762 Version: 1.5.0 00:02:08.762 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:08.762 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:08.762 Build type: native build 00:02:08.762 Project name: libvfio-user 00:02:08.762 Project version: 0.0.1 00:02:08.762 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.762 C linker for the host machine: cc ld.bfd 2.40-14 00:02:08.762 Host machine cpu family: x86_64 00:02:08.762 Host machine cpu: x86_64 00:02:08.762 Run-time dependency threads found: YES 00:02:08.762 Library dl found: YES 00:02:08.762 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.762 Run-time dependency json-c found: YES 0.17 00:02:08.762 Run-time dependency cmocka found: YES 1.1.7 00:02:08.762 Program pytest-3 found: NO 00:02:08.762 Program flake8 found: NO 00:02:08.762 Program misspell-fixer found: NO 00:02:08.762 Program restructuredtext-lint found: NO 00:02:08.763 Program valgrind found: YES (/usr/bin/valgrind) 00:02:08.763 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.763 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.763 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.763 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:08.763 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:08.763 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:08.763 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
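Stepping back a few entries: the SPDK build configuration recorded above reduces to the short script below (the cd target, configure flags, and -j144 are all copied from the logged commands; nothing here is new):

    # Reproduce this job's SPDK configure + build, as logged above.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j144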
00:02:08.763 Build targets in project: 8 00:02:08.763 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:08.763 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:08.763 00:02:08.763 libvfio-user 0.0.1 00:02:08.763 00:02:08.763 User defined options 00:02:08.763 buildtype : debug 00:02:08.763 default_library: shared 00:02:08.763 libdir : /usr/local/lib 00:02:08.763 00:02:08.763 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.332 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:09.332 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:09.332 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:09.332 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:09.332 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:09.332 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:09.332 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:09.332 [7/37] Compiling C object samples/null.p/null.c.o 00:02:09.332 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:09.332 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:09.332 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:09.332 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:09.332 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:09.332 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:09.332 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:09.332 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:09.332 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:09.332 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:09.332 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:09.332 [19/37] Compiling C object samples/server.p/server.c.o 00:02:09.332 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:09.332 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:09.332 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:09.332 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:09.332 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:09.332 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:09.332 [26/37] Compiling C object samples/client.p/client.c.o 00:02:09.332 [27/37] Linking target samples/client 00:02:09.332 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:09.593 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:09.593 [30/37] Linking target test/unit_tests 00:02:09.593 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:09.593 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:09.593 [33/37] Linking target samples/server 00:02:09.593 [34/37] Linking target samples/lspci 00:02:09.593 [35/37] Linking target samples/null 00:02:09.593 [36/37] Linking target samples/gpio-pci-idio-16 00:02:09.593 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:09.854 INFO: autodetecting backend as ninja 00:02:09.854 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
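The libvfio-user sub-build just completed follows the stock out-of-tree Meson/Ninja flow; a minimal sketch, with the source, build, and staging directories taken from the log (the meson setup options are an assumption, reduced to the buildtype and default_library values reported under "User defined options" above):

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    STAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
    meson setup --buildtype=debug --default-library=shared "$BUILD" "$SRC"  # assumed flags
    ninja -C "$BUILD"                                   # the [1/37]..[37/37] steps above
    DESTDIR="$STAGE" meson install --quiet -C "$BUILD"  # staged install; nothing lands in /usr/local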
00:02:09.854 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:10.116 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:10.116 ninja: no work to do. 00:02:16.710 The Meson build system 00:02:16.710 Version: 1.5.0 00:02:16.710 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:16.710 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:16.710 Build type: native build 00:02:16.710 Program cat found: YES (/usr/bin/cat) 00:02:16.710 Project name: DPDK 00:02:16.710 Project version: 24.07.0 00:02:16.710 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:16.710 C linker for the host machine: cc ld.bfd 2.40-14 00:02:16.710 Host machine cpu family: x86_64 00:02:16.710 Host machine cpu: x86_64 00:02:16.711 Message: ## Building in Developer Mode ## 00:02:16.711 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:16.711 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:16.711 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:16.711 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:16.711 Program cat found: YES (/usr/bin/cat) 00:02:16.711 Compiler for C supports arguments -march=native: YES 00:02:16.711 Checking for size of "void *" : 8 00:02:16.711 Checking for size of "void *" : 8 (cached) 00:02:16.711 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:16.711 Library m found: YES 00:02:16.711 Library numa found: YES 00:02:16.711 Has header "numaif.h" : YES 00:02:16.711 Library fdt found: NO 00:02:16.711 Library execinfo found: NO 00:02:16.711 Has header "execinfo.h" : YES 00:02:16.711 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:16.711 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:16.711 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:16.711 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:16.711 Run-time dependency openssl found: YES 3.1.1 00:02:16.711 Run-time dependency libpcap found: YES 1.10.4 00:02:16.711 Has header "pcap.h" with dependency libpcap: YES 00:02:16.711 Compiler for C supports arguments -Wcast-qual: YES 00:02:16.711 Compiler for C supports arguments -Wdeprecated: YES 00:02:16.711 Compiler for C supports arguments -Wformat: YES 00:02:16.711 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:16.711 Compiler for C supports arguments -Wformat-security: NO 00:02:16.711 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:16.711 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:16.711 Compiler for C supports arguments -Wnested-externs: YES 00:02:16.711 Compiler for C supports arguments -Wold-style-definition: YES 00:02:16.711 Compiler for C supports arguments -Wpointer-arith: YES 00:02:16.711 Compiler for C supports arguments -Wsign-compare: YES 00:02:16.711 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:16.711 Compiler for C supports arguments -Wundef: YES 00:02:16.711 Compiler for C supports arguments -Wwrite-strings: YES 00:02:16.711 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:16.711 Compiler for C supports 
arguments -Wno-packed-not-aligned: YES 00:02:16.711 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:16.711 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:16.711 Program objdump found: YES (/usr/bin/objdump) 00:02:16.711 Compiler for C supports arguments -mavx512f: YES 00:02:16.711 Checking if "AVX512 checking" compiles: YES 00:02:16.711 Fetching value of define "__SSE4_2__" : 1 00:02:16.711 Fetching value of define "__AES__" : 1 00:02:16.711 Fetching value of define "__AVX__" : 1 00:02:16.711 Fetching value of define "__AVX2__" : 1 00:02:16.711 Fetching value of define "__AVX512BW__" : 1 00:02:16.711 Fetching value of define "__AVX512CD__" : 1 00:02:16.711 Fetching value of define "__AVX512DQ__" : 1 00:02:16.711 Fetching value of define "__AVX512F__" : 1 00:02:16.711 Fetching value of define "__AVX512VL__" : 1 00:02:16.711 Fetching value of define "__PCLMUL__" : 1 00:02:16.711 Fetching value of define "__RDRND__" : 1 00:02:16.711 Fetching value of define "__RDSEED__" : 1 00:02:16.711 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:16.711 Fetching value of define "__znver1__" : (undefined) 00:02:16.711 Fetching value of define "__znver2__" : (undefined) 00:02:16.711 Fetching value of define "__znver3__" : (undefined) 00:02:16.711 Fetching value of define "__znver4__" : (undefined) 00:02:16.711 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:16.711 Message: lib/log: Defining dependency "log" 00:02:16.711 Message: lib/kvargs: Defining dependency "kvargs" 00:02:16.711 Message: lib/telemetry: Defining dependency "telemetry" 00:02:16.711 Checking for function "getentropy" : NO 00:02:16.711 Message: lib/eal: Defining dependency "eal" 00:02:16.711 Message: lib/ring: Defining dependency "ring" 00:02:16.711 Message: lib/rcu: Defining dependency "rcu" 00:02:16.711 Message: lib/mempool: Defining dependency "mempool" 00:02:16.711 Message: lib/mbuf: Defining dependency "mbuf" 00:02:16.711 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:16.711 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:16.711 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:16.711 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:16.711 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:16.711 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:16.711 Compiler for C supports arguments -mpclmul: YES 00:02:16.711 Compiler for C supports arguments -maes: YES 00:02:16.711 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:16.711 Compiler for C supports arguments -mavx512bw: YES 00:02:16.711 Compiler for C supports arguments -mavx512dq: YES 00:02:16.711 Compiler for C supports arguments -mavx512vl: YES 00:02:16.711 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:16.711 Compiler for C supports arguments -mavx2: YES 00:02:16.711 Compiler for C supports arguments -mavx: YES 00:02:16.711 Message: lib/net: Defining dependency "net" 00:02:16.711 Message: lib/meter: Defining dependency "meter" 00:02:16.711 Message: lib/ethdev: Defining dependency "ethdev" 00:02:16.711 Message: lib/pci: Defining dependency "pci" 00:02:16.711 Message: lib/cmdline: Defining dependency "cmdline" 00:02:16.711 Message: lib/hash: Defining dependency "hash" 00:02:16.711 Message: lib/timer: Defining dependency "timer" 00:02:16.711 Message: lib/compressdev: Defining dependency "compressdev" 00:02:16.711 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:16.711 Message: lib/dmadev: Defining dependency "dmadev" 
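All of the "Compiler for C supports arguments" and "Fetching value of define" probes above boil down to asking the compiler which ISA extensions -march=native enables on this builder; the same information can be pulled directly with the preprocessor's macro dump (a sketch; gcc's -dM -E mechanism is standard, and the grep pattern is just an illustrative filter for the macros meson checked):

    # Dump predefined macros under -march=native and filter for the ISA
    # features probed above (AVX512*, AES, PCLMUL, RDRND, RDSEED, VPCLMULQDQ).
    cc -march=native -dM -E - </dev/null \
        | grep -E '__(AVX512(F|BW|CD|DQ|VL)|AES|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__'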
00:02:16.711 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:16.711 Message: lib/power: Defining dependency "power" 00:02:16.711 Message: lib/reorder: Defining dependency "reorder" 00:02:16.711 Message: lib/security: Defining dependency "security" 00:02:16.711 Has header "linux/userfaultfd.h" : YES 00:02:16.711 Has header "linux/vduse.h" : YES 00:02:16.711 Message: lib/vhost: Defining dependency "vhost" 00:02:16.711 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:16.711 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:16.711 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:16.711 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:16.711 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:16.711 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:16.711 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:16.711 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:16.711 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:16.711 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:16.711 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:16.711 Configuring doxy-api-html.conf using configuration 00:02:16.711 Configuring doxy-api-man.conf using configuration 00:02:16.711 Program mandb found: YES (/usr/bin/mandb) 00:02:16.711 Program sphinx-build found: NO 00:02:16.711 Configuring rte_build_config.h using configuration 00:02:16.711 Message: 00:02:16.711 ================= 00:02:16.711 Applications Enabled 00:02:16.711 ================= 00:02:16.711 00:02:16.711 apps: 00:02:16.711 00:02:16.711 00:02:16.711 Message: 00:02:16.711 ================= 00:02:16.711 Libraries Enabled 00:02:16.711 ================= 00:02:16.711 00:02:16.711 libs: 00:02:16.711 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:16.711 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:16.711 cryptodev, dmadev, power, reorder, security, vhost, 00:02:16.711 00:02:16.711 Message: 00:02:16.711 =============== 00:02:16.711 Drivers Enabled 00:02:16.711 =============== 00:02:16.711 00:02:16.711 common: 00:02:16.711 00:02:16.711 bus: 00:02:16.711 pci, vdev, 00:02:16.711 mempool: 00:02:16.711 ring, 00:02:16.711 dma: 00:02:16.711 00:02:16.711 net: 00:02:16.711 00:02:16.711 crypto: 00:02:16.711 00:02:16.711 compress: 00:02:16.711 00:02:16.711 vdpa: 00:02:16.711 00:02:16.711 00:02:16.711 Message: 00:02:16.711 ================= 00:02:16.711 Content Skipped 00:02:16.711 ================= 00:02:16.711 00:02:16.711 apps: 00:02:16.711 dumpcap: explicitly disabled via build config 00:02:16.711 graph: explicitly disabled via build config 00:02:16.711 pdump: explicitly disabled via build config 00:02:16.711 proc-info: explicitly disabled via build config 00:02:16.711 test-acl: explicitly disabled via build config 00:02:16.711 test-bbdev: explicitly disabled via build config 00:02:16.711 test-cmdline: explicitly disabled via build config 00:02:16.711 test-compress-perf: explicitly disabled via build config 00:02:16.711 test-crypto-perf: explicitly disabled via build config 00:02:16.711 test-dma-perf: explicitly disabled via build config 00:02:16.711 test-eventdev: explicitly disabled via build config 00:02:16.711 test-fib: explicitly disabled via build config 00:02:16.711 test-flow-perf: explicitly disabled via build config 00:02:16.711 test-gpudev: explicitly disabled 
via build config 00:02:16.711 test-mldev: explicitly disabled via build config 00:02:16.711 test-pipeline: explicitly disabled via build config 00:02:16.711 test-pmd: explicitly disabled via build config 00:02:16.711 test-regex: explicitly disabled via build config 00:02:16.711 test-sad: explicitly disabled via build config 00:02:16.711 test-security-perf: explicitly disabled via build config 00:02:16.711 00:02:16.711 libs: 00:02:16.711 argparse: explicitly disabled via build config 00:02:16.711 ptr_compress: explicitly disabled via build config 00:02:16.711 metrics: explicitly disabled via build config 00:02:16.711 acl: explicitly disabled via build config 00:02:16.711 bbdev: explicitly disabled via build config 00:02:16.711 bitratestats: explicitly disabled via build config 00:02:16.711 bpf: explicitly disabled via build config 00:02:16.711 cfgfile: explicitly disabled via build config 00:02:16.711 distributor: explicitly disabled via build config 00:02:16.711 efd: explicitly disabled via build config 00:02:16.711 eventdev: explicitly disabled via build config 00:02:16.711 dispatcher: explicitly disabled via build config 00:02:16.711 gpudev: explicitly disabled via build config 00:02:16.711 gro: explicitly disabled via build config 00:02:16.712 gso: explicitly disabled via build config 00:02:16.712 ip_frag: explicitly disabled via build config 00:02:16.712 jobstats: explicitly disabled via build config 00:02:16.712 latencystats: explicitly disabled via build config 00:02:16.712 lpm: explicitly disabled via build config 00:02:16.712 member: explicitly disabled via build config 00:02:16.712 pcapng: explicitly disabled via build config 00:02:16.712 rawdev: explicitly disabled via build config 00:02:16.712 regexdev: explicitly disabled via build config 00:02:16.712 mldev: explicitly disabled via build config 00:02:16.712 rib: explicitly disabled via build config 00:02:16.712 sched: explicitly disabled via build config 00:02:16.712 stack: explicitly disabled via build config 00:02:16.712 ipsec: explicitly disabled via build config 00:02:16.712 pdcp: explicitly disabled via build config 00:02:16.712 fib: explicitly disabled via build config 00:02:16.712 port: explicitly disabled via build config 00:02:16.712 pdump: explicitly disabled via build config 00:02:16.712 table: explicitly disabled via build config 00:02:16.712 pipeline: explicitly disabled via build config 00:02:16.712 graph: explicitly disabled via build config 00:02:16.712 node: explicitly disabled via build config 00:02:16.712 00:02:16.712 drivers: 00:02:16.712 common/cpt: not in enabled drivers build config 00:02:16.712 common/dpaax: not in enabled drivers build config 00:02:16.712 common/iavf: not in enabled drivers build config 00:02:16.712 common/idpf: not in enabled drivers build config 00:02:16.712 common/ionic: not in enabled drivers build config 00:02:16.712 common/mvep: not in enabled drivers build config 00:02:16.712 common/octeontx: not in enabled drivers build config 00:02:16.712 bus/auxiliary: not in enabled drivers build config 00:02:16.712 bus/cdx: not in enabled drivers build config 00:02:16.712 bus/dpaa: not in enabled drivers build config 00:02:16.712 bus/fslmc: not in enabled drivers build config 00:02:16.712 bus/ifpga: not in enabled drivers build config 00:02:16.712 bus/platform: not in enabled drivers build config 00:02:16.712 bus/uacce: not in enabled drivers build config 00:02:16.712 bus/vmbus: not in enabled drivers build config 00:02:16.712 common/cnxk: not in enabled drivers build config 00:02:16.712 
common/mlx5: not in enabled drivers build config 00:02:16.712 common/nfp: not in enabled drivers build config 00:02:16.712 common/nitrox: not in enabled drivers build config 00:02:16.712 common/qat: not in enabled drivers build config 00:02:16.712 common/sfc_efx: not in enabled drivers build config 00:02:16.712 mempool/bucket: not in enabled drivers build config 00:02:16.712 mempool/cnxk: not in enabled drivers build config 00:02:16.712 mempool/dpaa: not in enabled drivers build config 00:02:16.712 mempool/dpaa2: not in enabled drivers build config 00:02:16.712 mempool/octeontx: not in enabled drivers build config 00:02:16.712 mempool/stack: not in enabled drivers build config 00:02:16.712 dma/cnxk: not in enabled drivers build config 00:02:16.712 dma/dpaa: not in enabled drivers build config 00:02:16.712 dma/dpaa2: not in enabled drivers build config 00:02:16.712 dma/hisilicon: not in enabled drivers build config 00:02:16.712 dma/idxd: not in enabled drivers build config 00:02:16.712 dma/ioat: not in enabled drivers build config 00:02:16.712 dma/odm: not in enabled drivers build config 00:02:16.712 dma/skeleton: not in enabled drivers build config 00:02:16.712 net/af_packet: not in enabled drivers build config 00:02:16.712 net/af_xdp: not in enabled drivers build config 00:02:16.712 net/ark: not in enabled drivers build config 00:02:16.712 net/atlantic: not in enabled drivers build config 00:02:16.712 net/avp: not in enabled drivers build config 00:02:16.712 net/axgbe: not in enabled drivers build config 00:02:16.712 net/bnx2x: not in enabled drivers build config 00:02:16.712 net/bnxt: not in enabled drivers build config 00:02:16.712 net/bonding: not in enabled drivers build config 00:02:16.712 net/cnxk: not in enabled drivers build config 00:02:16.712 net/cpfl: not in enabled drivers build config 00:02:16.712 net/cxgbe: not in enabled drivers build config 00:02:16.712 net/dpaa: not in enabled drivers build config 00:02:16.712 net/dpaa2: not in enabled drivers build config 00:02:16.712 net/e1000: not in enabled drivers build config 00:02:16.712 net/ena: not in enabled drivers build config 00:02:16.712 net/enetc: not in enabled drivers build config 00:02:16.712 net/enetfec: not in enabled drivers build config 00:02:16.712 net/enic: not in enabled drivers build config 00:02:16.712 net/failsafe: not in enabled drivers build config 00:02:16.712 net/fm10k: not in enabled drivers build config 00:02:16.712 net/gve: not in enabled drivers build config 00:02:16.712 net/hinic: not in enabled drivers build config 00:02:16.712 net/hns3: not in enabled drivers build config 00:02:16.712 net/i40e: not in enabled drivers build config 00:02:16.712 net/iavf: not in enabled drivers build config 00:02:16.712 net/ice: not in enabled drivers build config 00:02:16.712 net/idpf: not in enabled drivers build config 00:02:16.712 net/igc: not in enabled drivers build config 00:02:16.712 net/ionic: not in enabled drivers build config 00:02:16.712 net/ipn3ke: not in enabled drivers build config 00:02:16.712 net/ixgbe: not in enabled drivers build config 00:02:16.712 net/mana: not in enabled drivers build config 00:02:16.712 net/memif: not in enabled drivers build config 00:02:16.712 net/mlx4: not in enabled drivers build config 00:02:16.712 net/mlx5: not in enabled drivers build config 00:02:16.712 net/mvneta: not in enabled drivers build config 00:02:16.712 net/mvpp2: not in enabled drivers build config 00:02:16.712 net/netvsc: not in enabled drivers build config 00:02:16.712 net/nfb: not in enabled drivers build 
config 00:02:16.712 net/nfp: not in enabled drivers build config 00:02:16.712 net/ngbe: not in enabled drivers build config 00:02:16.712 net/ntnic: not in enabled drivers build config 00:02:16.712 net/null: not in enabled drivers build config 00:02:16.712 net/octeontx: not in enabled drivers build config 00:02:16.712 net/octeon_ep: not in enabled drivers build config 00:02:16.712 net/pcap: not in enabled drivers build config 00:02:16.712 net/pfe: not in enabled drivers build config 00:02:16.712 net/qede: not in enabled drivers build config 00:02:16.712 net/ring: not in enabled drivers build config 00:02:16.712 net/sfc: not in enabled drivers build config 00:02:16.712 net/softnic: not in enabled drivers build config 00:02:16.712 net/tap: not in enabled drivers build config 00:02:16.712 net/thunderx: not in enabled drivers build config 00:02:16.712 net/txgbe: not in enabled drivers build config 00:02:16.712 net/vdev_netvsc: not in enabled drivers build config 00:02:16.712 net/vhost: not in enabled drivers build config 00:02:16.712 net/virtio: not in enabled drivers build config 00:02:16.712 net/vmxnet3: not in enabled drivers build config 00:02:16.712 raw/*: missing internal dependency, "rawdev" 00:02:16.712 crypto/armv8: not in enabled drivers build config 00:02:16.712 crypto/bcmfs: not in enabled drivers build config 00:02:16.712 crypto/caam_jr: not in enabled drivers build config 00:02:16.712 crypto/ccp: not in enabled drivers build config 00:02:16.712 crypto/cnxk: not in enabled drivers build config 00:02:16.712 crypto/dpaa_sec: not in enabled drivers build config 00:02:16.712 crypto/dpaa2_sec: not in enabled drivers build config 00:02:16.712 crypto/ionic: not in enabled drivers build config 00:02:16.712 crypto/ipsec_mb: not in enabled drivers build config 00:02:16.712 crypto/mlx5: not in enabled drivers build config 00:02:16.712 crypto/mvsam: not in enabled drivers build config 00:02:16.712 crypto/nitrox: not in enabled drivers build config 00:02:16.712 crypto/null: not in enabled drivers build config 00:02:16.712 crypto/octeontx: not in enabled drivers build config 00:02:16.712 crypto/openssl: not in enabled drivers build config 00:02:16.712 crypto/scheduler: not in enabled drivers build config 00:02:16.712 crypto/uadk: not in enabled drivers build config 00:02:16.712 crypto/virtio: not in enabled drivers build config 00:02:16.712 compress/isal: not in enabled drivers build config 00:02:16.712 compress/mlx5: not in enabled drivers build config 00:02:16.712 compress/nitrox: not in enabled drivers build config 00:02:16.712 compress/octeontx: not in enabled drivers build config 00:02:16.712 compress/uadk: not in enabled drivers build config 00:02:16.712 compress/zlib: not in enabled drivers build config 00:02:16.712 regex/*: missing internal dependency, "regexdev" 00:02:16.712 ml/*: missing internal dependency, "mldev" 00:02:16.712 vdpa/ifc: not in enabled drivers build config 00:02:16.712 vdpa/mlx5: not in enabled drivers build config 00:02:16.712 vdpa/nfp: not in enabled drivers build config 00:02:16.712 vdpa/sfc: not in enabled drivers build config 00:02:16.712 event/*: missing internal dependency, "eventdev" 00:02:16.712 baseband/*: missing internal dependency, "bbdev" 00:02:16.712 gpu/*: missing internal dependency, "gpudev" 00:02:16.712 00:02:16.712 00:02:16.712 Build targets in project: 84 00:02:16.712 00:02:16.712 DPDK 24.07.0 00:02:16.712 00:02:16.712 User defined options 00:02:16.712 buildtype : debug 00:02:16.712 default_library : shared 00:02:16.712 libdir : lib 00:02:16.712 
prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:16.712 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:16.712 c_link_args : 00:02:16.712 cpu_instruction_set: native 00:02:16.712 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:16.712 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump,ptr_compress 00:02:16.712 enable_docs : false 00:02:16.712 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:16.712 enable_kmods : false 00:02:16.712 max_lcores : 128 00:02:16.712 tests : false 00:02:16.712 00:02:16.712 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.712 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:16.712 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:16.712 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:16.712 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:16.712 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.712 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.712 [6/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.712 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.712 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.712 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.712 [10/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.712 [11/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.713 [12/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.713 [13/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.713 [14/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:16.713 [15/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:16.713 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.713 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:16.713 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.713 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:16.971 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.971 [21/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.971 [22/268] Linking static target lib/librte_kvargs.a 00:02:16.971 [23/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.971 [24/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:16.971 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:16.971 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.971 [27/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.971 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.971 [29/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.971 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.971 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.971 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.971 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.971 [34/268] Linking static target lib/librte_log.a 00:02:16.971 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.971 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.971 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.971 [38/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:16.971 [39/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.971 [40/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:16.971 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.971 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.971 [43/268] Linking static target lib/librte_pci.a 00:02:16.971 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.971 [45/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.971 [46/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:16.971 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:16.971 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.971 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.971 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.971 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.971 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.971 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.971 [54/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.971 [55/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:16.971 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.971 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.971 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.971 [59/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.971 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.971 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.971 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.971 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.971 [64/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.971 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.971 [66/268] Linking static target lib/librte_telemetry.a 00:02:16.971 [67/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.971 [68/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.971 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.971 [70/268] Linking static target lib/librte_meter.a 00:02:16.971 [71/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.971 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.971 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.971 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.971 [75/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.971 [76/268] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:16.971 [77/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:16.971 [78/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:17.232 [79/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.232 [80/268] Linking static target lib/librte_ring.a 00:02:17.232 [81/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:17.232 [82/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.232 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:17.232 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:17.232 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.232 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:17.232 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:17.232 [88/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.232 [89/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.232 [90/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.232 [91/268] Linking static target lib/librte_net.a 00:02:17.232 [92/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:17.232 [93/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.232 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:17.232 [95/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.232 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:17.232 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:17.232 [98/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.232 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.232 [100/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:17.232 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:17.232 [102/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:17.232 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.232 [104/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.232 [105/268] Linking static target lib/librte_reorder.a 00:02:17.232 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:17.232 [107/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.232 [108/268] 
Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:17.232 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:17.233 [110/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:17.233 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.233 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:17.233 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:17.233 [114/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.233 [115/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:17.233 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:17.233 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:17.233 [118/268] Linking static target lib/librte_timer.a 00:02:17.233 [119/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.233 [120/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.233 [121/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.233 [122/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.233 [123/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.233 [124/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.495 [125/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.495 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:17.495 [127/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:17.495 [128/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.495 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:17.495 [130/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:17.495 [131/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.495 [132/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.495 [133/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.495 [134/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.495 [135/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:17.495 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.495 [137/268] Linking static target lib/librte_mbuf.a 00:02:17.495 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:17.495 [139/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:17.495 [140/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.495 [141/268] Linking static target lib/librte_cmdline.a 00:02:17.495 [142/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.495 [143/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:17.495 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:17.495 [145/268] Linking static target lib/librte_mempool.a 00:02:17.495 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:17.495 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:17.495 [148/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.495 [149/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:17.495 [150/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.495 [151/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:17.495 [152/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:17.495 [153/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.495 [154/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:17.495 [155/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:17.495 [156/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:17.495 [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:17.495 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:17.495 [159/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:17.495 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:17.495 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.495 [162/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.495 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:17.495 [164/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.495 [165/268] Linking static target lib/librte_compressdev.a 00:02:17.495 [166/268] Linking static target lib/librte_power.a 00:02:17.495 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.495 [168/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:17.495 [169/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.495 [170/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.495 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.495 [172/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.495 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:17.495 [174/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.495 [175/268] Linking static target lib/librte_hash.a 00:02:17.495 [176/268] Linking static target lib/librte_dmadev.a 00:02:17.495 [177/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.495 [178/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.495 [179/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.495 [180/268] Linking static target lib/librte_eal.a 00:02:17.495 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:17.495 [182/268] Linking static target lib/librte_rcu.a 00:02:17.495 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:17.495 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:17.495 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.756 [186/268] Linking target lib/librte_log.so.24.2 00:02:17.756 [187/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.756 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.756 [189/268] Compiling C object 
drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.756 [190/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.756 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.756 [192/268] Linking static target drivers/librte_bus_vdev.a 00:02:17.756 [193/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:17.756 [194/268] Linking static target lib/librte_security.a 00:02:17.756 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.756 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.756 [197/268] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.756 [198/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.756 [199/268] Linking static target drivers/librte_mempool_ring.a 00:02:17.757 [200/268] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:02:17.757 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.757 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.757 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.757 [204/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.757 [205/268] Linking target lib/librte_kvargs.so.24.2 00:02:17.757 [206/268] Linking target lib/librte_telemetry.so.24.2 00:02:17.757 [207/268] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.757 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.757 [209/268] Linking static target drivers/librte_bus_pci.a 00:02:18.018 [210/268] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:02:18.018 [211/268] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:02:18.018 [212/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:18.018 [213/268] Linking static target lib/librte_cryptodev.a 00:02:18.018 [214/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.018 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.279 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.280 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.280 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.280 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.280 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.540 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.540 [222/268] Linking static target lib/librte_ethdev.a 00:02:18.540 [223/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.540 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.540 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.802 [226/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.802 [227/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.063 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.063 [229/268] Linking static target lib/librte_vhost.a 00:02:20.460 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.408 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.001 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.386 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.386 [234/268] Linking target lib/librte_eal.so.24.2 00:02:29.386 [235/268] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:02:29.386 [236/268] Linking target lib/librte_ring.so.24.2 00:02:29.386 [237/268] Linking target lib/librte_meter.so.24.2 00:02:29.386 [238/268] Linking target lib/librte_timer.so.24.2 00:02:29.386 [239/268] Linking target lib/librte_pci.so.24.2 00:02:29.386 [240/268] Linking target lib/librte_dmadev.so.24.2 00:02:29.386 [241/268] Linking target drivers/librte_bus_vdev.so.24.2 00:02:29.386 [242/268] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:02:29.386 [243/268] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:02:29.386 [244/268] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:02:29.386 [245/268] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:02:29.386 [246/268] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:02:29.386 [247/268] Linking target lib/librte_rcu.so.24.2 00:02:29.386 [248/268] Linking target lib/librte_mempool.so.24.2 00:02:29.386 [249/268] Linking target drivers/librte_bus_pci.so.24.2 00:02:29.647 [250/268] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:02:29.647 [251/268] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:02:29.647 [252/268] Linking target lib/librte_mbuf.so.24.2 00:02:29.647 [253/268] Linking target drivers/librte_mempool_ring.so.24.2 00:02:29.908 [254/268] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:02:29.908 [255/268] Linking target lib/librte_net.so.24.2 00:02:29.908 [256/268] Linking target lib/librte_reorder.so.24.2 00:02:29.908 [257/268] Linking target lib/librte_compressdev.so.24.2 00:02:29.908 [258/268] Linking target lib/librte_cryptodev.so.24.2 00:02:29.908 [259/268] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:02:29.908 [260/268] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:02:29.908 [261/268] Linking target lib/librte_cmdline.so.24.2 00:02:29.908 [262/268] Linking target lib/librte_hash.so.24.2 00:02:29.908 [263/268] Linking target lib/librte_security.so.24.2 00:02:29.908 [264/268] Linking target lib/librte_ethdev.so.24.2 00:02:30.169 [265/268] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:02:30.169 [266/268] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:02:30.169 [267/268] Linking target lib/librte_power.so.24.2 00:02:30.169 [268/268] Linking target lib/librte_vhost.so.24.2 
00:02:30.169 INFO: autodetecting backend as ninja 00:02:30.169 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:34.376 CC lib/log/log.o 00:02:34.376 CC lib/log/log_flags.o 00:02:34.376 CC lib/ut_mock/mock.o 00:02:34.376 CC lib/ut/ut.o 00:02:34.376 CC lib/log/log_deprecated.o 00:02:34.376 LIB libspdk_ut.a 00:02:34.376 LIB libspdk_ut_mock.a 00:02:34.376 SO libspdk_ut.so.2.0 00:02:34.376 LIB libspdk_log.a 00:02:34.376 SO libspdk_ut_mock.so.6.0 00:02:34.376 SO libspdk_log.so.7.1 00:02:34.376 SYMLINK libspdk_ut.so 00:02:34.376 SYMLINK libspdk_ut_mock.so 00:02:34.376 SYMLINK libspdk_log.so 00:02:34.636 CC lib/dma/dma.o 00:02:34.636 CC lib/util/base64.o 00:02:34.636 CC lib/util/bit_array.o 00:02:34.636 CC lib/ioat/ioat.o 00:02:34.636 CC lib/util/cpuset.o 00:02:34.636 CC lib/util/crc16.o 00:02:34.636 CC lib/util/crc32.o 00:02:34.636 CC lib/util/crc32c.o 00:02:34.636 CXX lib/trace_parser/trace.o 00:02:34.636 CC lib/util/crc32_ieee.o 00:02:34.636 CC lib/util/crc64.o 00:02:34.636 CC lib/util/dif.o 00:02:34.636 CC lib/util/fd.o 00:02:34.636 CC lib/util/fd_group.o 00:02:34.636 CC lib/util/file.o 00:02:34.636 CC lib/util/hexlify.o 00:02:34.636 CC lib/util/iov.o 00:02:34.636 CC lib/util/math.o 00:02:34.636 CC lib/util/net.o 00:02:34.636 CC lib/util/pipe.o 00:02:34.636 CC lib/util/strerror_tls.o 00:02:34.636 CC lib/util/string.o 00:02:34.636 CC lib/util/uuid.o 00:02:34.636 CC lib/util/xor.o 00:02:34.636 CC lib/util/zipf.o 00:02:34.636 CC lib/util/md5.o 00:02:34.897 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.897 CC lib/vfio_user/host/vfio_user.o 00:02:34.897 LIB libspdk_dma.a 00:02:34.897 SO libspdk_dma.so.5.0 00:02:34.897 LIB libspdk_ioat.a 00:02:34.897 SYMLINK libspdk_dma.so 00:02:35.158 SO libspdk_ioat.so.7.0 00:02:35.158 SYMLINK libspdk_ioat.so 00:02:35.158 LIB libspdk_vfio_user.a 00:02:35.158 SO libspdk_vfio_user.so.5.0 00:02:35.158 LIB libspdk_util.a 00:02:35.158 SYMLINK libspdk_vfio_user.so 00:02:35.420 SO libspdk_util.so.10.0 00:02:35.420 SYMLINK libspdk_util.so 00:02:35.420 LIB libspdk_trace_parser.a 00:02:35.681 SO libspdk_trace_parser.so.6.0 00:02:35.681 SYMLINK libspdk_trace_parser.so 00:02:35.681 CC lib/env_dpdk/env.o 00:02:35.681 CC lib/env_dpdk/memory.o 00:02:35.681 CC lib/idxd/idxd.o 00:02:35.681 CC lib/env_dpdk/pci.o 00:02:35.681 CC lib/idxd/idxd_user.o 00:02:35.681 CC lib/env_dpdk/init.o 00:02:35.681 CC lib/idxd/idxd_kernel.o 00:02:35.681 CC lib/env_dpdk/threads.o 00:02:35.681 CC lib/env_dpdk/pci_ioat.o 00:02:35.681 CC lib/conf/conf.o 00:02:35.681 CC lib/json/json_parse.o 00:02:35.681 CC lib/rdma_provider/common.o 00:02:35.681 CC lib/env_dpdk/pci_virtio.o 00:02:35.681 CC lib/rdma_utils/rdma_utils.o 00:02:35.681 CC lib/json/json_util.o 00:02:35.681 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:35.681 CC lib/vmd/vmd.o 00:02:35.681 CC lib/env_dpdk/pci_vmd.o 00:02:35.681 CC lib/json/json_write.o 00:02:35.681 CC lib/vmd/led.o 00:02:35.681 CC lib/env_dpdk/pci_idxd.o 00:02:35.681 CC lib/env_dpdk/pci_event.o 00:02:35.681 CC lib/env_dpdk/sigbus_handler.o 00:02:35.681 CC lib/env_dpdk/pci_dpdk.o 00:02:35.681 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.681 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.941 LIB libspdk_rdma_provider.a 00:02:36.202 SO libspdk_rdma_provider.so.6.0 00:02:36.202 LIB libspdk_conf.a 00:02:36.202 SO libspdk_conf.so.6.0 00:02:36.202 LIB libspdk_rdma_utils.a 00:02:36.202 LIB libspdk_json.a 00:02:36.202 SYMLINK libspdk_rdma_provider.so 00:02:36.202 SO libspdk_rdma_utils.so.1.0 
00:02:36.202 SO libspdk_json.so.6.0 00:02:36.202 SYMLINK libspdk_conf.so 00:02:36.202 SYMLINK libspdk_rdma_utils.so 00:02:36.202 SYMLINK libspdk_json.so 00:02:36.463 LIB libspdk_idxd.a 00:02:36.463 SO libspdk_idxd.so.12.1 00:02:36.463 LIB libspdk_vmd.a 00:02:36.463 SO libspdk_vmd.so.6.0 00:02:36.463 SYMLINK libspdk_idxd.so 00:02:36.463 SYMLINK libspdk_vmd.so 00:02:36.724 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.724 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.724 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.724 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.984 LIB libspdk_jsonrpc.a 00:02:36.984 SO libspdk_jsonrpc.so.6.0 00:02:36.984 SYMLINK libspdk_jsonrpc.so 00:02:36.984 LIB libspdk_env_dpdk.a 00:02:37.246 SO libspdk_env_dpdk.so.15.0 00:02:37.246 SYMLINK libspdk_env_dpdk.so 00:02:37.246 CC lib/rpc/rpc.o 00:02:37.507 LIB libspdk_rpc.a 00:02:37.507 SO libspdk_rpc.so.6.0 00:02:37.768 SYMLINK libspdk_rpc.so 00:02:38.029 CC lib/trace/trace.o 00:02:38.029 CC lib/trace/trace_flags.o 00:02:38.029 CC lib/trace/trace_rpc.o 00:02:38.029 CC lib/notify/notify.o 00:02:38.029 CC lib/keyring/keyring.o 00:02:38.029 CC lib/notify/notify_rpc.o 00:02:38.029 CC lib/keyring/keyring_rpc.o 00:02:38.290 LIB libspdk_notify.a 00:02:38.290 SO libspdk_notify.so.6.0 00:02:38.290 LIB libspdk_keyring.a 00:02:38.290 LIB libspdk_trace.a 00:02:38.290 SO libspdk_keyring.so.2.0 00:02:38.290 SYMLINK libspdk_notify.so 00:02:38.290 SO libspdk_trace.so.11.0 00:02:38.290 SYMLINK libspdk_keyring.so 00:02:38.290 SYMLINK libspdk_trace.so 00:02:38.862 CC lib/thread/thread.o 00:02:38.862 CC lib/thread/iobuf.o 00:02:38.862 CC lib/sock/sock.o 00:02:38.862 CC lib/sock/sock_rpc.o 00:02:39.123 LIB libspdk_sock.a 00:02:39.123 SO libspdk_sock.so.10.0 00:02:39.123 SYMLINK libspdk_sock.so 00:02:39.732 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.732 CC lib/nvme/nvme_ctrlr.o 00:02:39.732 CC lib/nvme/nvme_fabric.o 00:02:39.732 CC lib/nvme/nvme_ns_cmd.o 00:02:39.732 CC lib/nvme/nvme_ns.o 00:02:39.732 CC lib/nvme/nvme_pcie_common.o 00:02:39.732 CC lib/nvme/nvme_pcie.o 00:02:39.732 CC lib/nvme/nvme_qpair.o 00:02:39.732 CC lib/nvme/nvme.o 00:02:39.732 CC lib/nvme/nvme_quirks.o 00:02:39.732 CC lib/nvme/nvme_transport.o 00:02:39.732 CC lib/nvme/nvme_discovery.o 00:02:39.732 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:39.732 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:39.732 CC lib/nvme/nvme_tcp.o 00:02:39.732 CC lib/nvme/nvme_opal.o 00:02:39.732 CC lib/nvme/nvme_io_msg.o 00:02:39.732 CC lib/nvme/nvme_poll_group.o 00:02:39.732 CC lib/nvme/nvme_zns.o 00:02:39.732 CC lib/nvme/nvme_stubs.o 00:02:39.732 CC lib/nvme/nvme_auth.o 00:02:39.732 CC lib/nvme/nvme_cuse.o 00:02:39.732 CC lib/nvme/nvme_vfio_user.o 00:02:39.732 CC lib/nvme/nvme_rdma.o 00:02:40.048 LIB libspdk_thread.a 00:02:40.048 SO libspdk_thread.so.10.2 00:02:40.355 SYMLINK libspdk_thread.so 00:02:40.678 CC lib/init/json_config.o 00:02:40.678 CC lib/fsdev/fsdev.o 00:02:40.678 CC lib/init/subsystem.o 00:02:40.678 CC lib/init/subsystem_rpc.o 00:02:40.678 CC lib/fsdev/fsdev_io.o 00:02:40.678 CC lib/init/rpc.o 00:02:40.678 CC lib/fsdev/fsdev_rpc.o 00:02:40.678 CC lib/accel/accel.o 00:02:40.678 CC lib/accel/accel_rpc.o 00:02:40.678 CC lib/virtio/virtio.o 00:02:40.678 CC lib/accel/accel_sw.o 00:02:40.678 CC lib/virtio/virtio_vhost_user.o 00:02:40.678 CC lib/virtio/virtio_vfio_user.o 00:02:40.678 CC lib/virtio/virtio_pci.o 00:02:40.678 CC lib/blob/blobstore.o 00:02:40.678 CC lib/blob/request.o 00:02:40.678 CC lib/blob/zeroes.o 00:02:40.678 CC lib/blob/blob_bs_dev.o 00:02:40.678 CC lib/vfu_tgt/tgt_endpoint.o 00:02:40.678 CC 
lib/vfu_tgt/tgt_rpc.o 00:02:40.981 LIB libspdk_init.a 00:02:40.981 SO libspdk_init.so.6.0 00:02:40.981 LIB libspdk_virtio.a 00:02:40.981 LIB libspdk_vfu_tgt.a 00:02:40.981 SYMLINK libspdk_init.so 00:02:40.981 SO libspdk_vfu_tgt.so.3.0 00:02:40.981 SO libspdk_virtio.so.7.0 00:02:40.981 SYMLINK libspdk_vfu_tgt.so 00:02:40.981 SYMLINK libspdk_virtio.so 00:02:41.242 LIB libspdk_fsdev.a 00:02:41.242 SO libspdk_fsdev.so.1.0 00:02:41.242 CC lib/event/app.o 00:02:41.242 CC lib/event/reactor.o 00:02:41.242 CC lib/event/log_rpc.o 00:02:41.242 CC lib/event/app_rpc.o 00:02:41.242 CC lib/event/scheduler_static.o 00:02:41.242 SYMLINK libspdk_fsdev.so 00:02:41.502 LIB libspdk_accel.a 00:02:41.502 LIB libspdk_nvme.a 00:02:41.502 SO libspdk_accel.so.16.0 00:02:41.762 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:41.762 SYMLINK libspdk_accel.so 00:02:41.762 SO libspdk_nvme.so.14.0 00:02:41.762 LIB libspdk_event.a 00:02:41.762 SO libspdk_event.so.15.0 00:02:41.762 SYMLINK libspdk_event.so 00:02:42.023 SYMLINK libspdk_nvme.so 00:02:42.023 CC lib/bdev/bdev.o 00:02:42.023 CC lib/bdev/bdev_rpc.o 00:02:42.023 CC lib/bdev/bdev_zone.o 00:02:42.023 CC lib/bdev/part.o 00:02:42.023 CC lib/bdev/scsi_nvme.o 00:02:42.283 LIB libspdk_fuse_dispatcher.a 00:02:42.283 SO libspdk_fuse_dispatcher.so.1.0 00:02:42.283 SYMLINK libspdk_fuse_dispatcher.so 00:02:43.225 LIB libspdk_blob.a 00:02:43.225 SO libspdk_blob.so.11.0 00:02:43.225 SYMLINK libspdk_blob.so 00:02:43.796 CC lib/blobfs/blobfs.o 00:02:43.796 CC lib/blobfs/tree.o 00:02:43.796 CC lib/lvol/lvol.o 00:02:44.368 LIB libspdk_bdev.a 00:02:44.368 SO libspdk_bdev.so.17.0 00:02:44.368 LIB libspdk_blobfs.a 00:02:44.368 SO libspdk_blobfs.so.10.0 00:02:44.368 SYMLINK libspdk_bdev.so 00:02:44.629 LIB libspdk_lvol.a 00:02:44.629 SYMLINK libspdk_blobfs.so 00:02:44.629 SO libspdk_lvol.so.10.0 00:02:44.629 SYMLINK libspdk_lvol.so 00:02:44.889 CC lib/nvmf/ctrlr.o 00:02:44.889 CC lib/nvmf/ctrlr_discovery.o 00:02:44.889 CC lib/nbd/nbd.o 00:02:44.889 CC lib/scsi/dev.o 00:02:44.889 CC lib/nvmf/ctrlr_bdev.o 00:02:44.889 CC lib/nbd/nbd_rpc.o 00:02:44.889 CC lib/scsi/lun.o 00:02:44.889 CC lib/nvmf/subsystem.o 00:02:44.889 CC lib/scsi/port.o 00:02:44.889 CC lib/nvmf/nvmf.o 00:02:44.889 CC lib/scsi/scsi.o 00:02:44.889 CC lib/nvmf/nvmf_rpc.o 00:02:44.889 CC lib/ftl/ftl_core.o 00:02:44.889 CC lib/scsi/scsi_bdev.o 00:02:44.889 CC lib/nvmf/transport.o 00:02:44.889 CC lib/ftl/ftl_init.o 00:02:44.889 CC lib/scsi/scsi_pr.o 00:02:44.889 CC lib/nvmf/tcp.o 00:02:44.889 CC lib/ftl/ftl_layout.o 00:02:44.889 CC lib/nvmf/stubs.o 00:02:44.889 CC lib/scsi/scsi_rpc.o 00:02:44.889 CC lib/ftl/ftl_debug.o 00:02:44.889 CC lib/scsi/task.o 00:02:44.889 CC lib/ublk/ublk.o 00:02:44.889 CC lib/nvmf/mdns_server.o 00:02:44.889 CC lib/ublk/ublk_rpc.o 00:02:44.889 CC lib/nvmf/vfio_user.o 00:02:44.889 CC lib/ftl/ftl_io.o 00:02:44.889 CC lib/nvmf/rdma.o 00:02:44.889 CC lib/ftl/ftl_sb.o 00:02:44.889 CC lib/nvmf/auth.o 00:02:44.889 CC lib/ftl/ftl_l2p.o 00:02:44.889 CC lib/ftl/ftl_l2p_flat.o 00:02:44.889 CC lib/ftl/ftl_nv_cache.o 00:02:44.889 CC lib/ftl/ftl_band.o 00:02:44.889 CC lib/ftl/ftl_band_ops.o 00:02:44.889 CC lib/ftl/ftl_writer.o 00:02:44.889 CC lib/ftl/ftl_rq.o 00:02:44.889 CC lib/ftl/ftl_reloc.o 00:02:44.889 CC lib/ftl/ftl_l2p_cache.o 00:02:44.889 CC lib/ftl/ftl_p2l.o 00:02:44.889 CC lib/ftl/ftl_p2l_log.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:44.889 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:44.889 CC lib/ftl/utils/ftl_conf.o 00:02:44.889 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:44.889 CC lib/ftl/utils/ftl_property.o 00:02:44.889 CC lib/ftl/utils/ftl_md.o 00:02:44.889 CC lib/ftl/utils/ftl_mempool.o 00:02:44.889 CC lib/ftl/utils/ftl_bitmap.o 00:02:44.889 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:44.889 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:44.889 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:44.889 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:44.889 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:44.889 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:44.889 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:44.889 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:44.889 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:44.889 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:44.889 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:44.889 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:44.889 CC lib/ftl/base/ftl_base_dev.o 00:02:44.889 CC lib/ftl/ftl_trace.o 00:02:44.889 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:44.889 CC lib/ftl/base/ftl_base_bdev.o 00:02:45.459 LIB libspdk_nbd.a 00:02:45.459 SO libspdk_nbd.so.7.0 00:02:45.459 LIB libspdk_scsi.a 00:02:45.459 SO libspdk_scsi.so.9.0 00:02:45.459 SYMLINK libspdk_nbd.so 00:02:45.720 SYMLINK libspdk_scsi.so 00:02:45.982 LIB libspdk_ublk.a 00:02:45.982 SO libspdk_ublk.so.3.0 00:02:45.982 CC lib/vhost/vhost.o 00:02:45.982 CC lib/vhost/vhost_rpc.o 00:02:45.982 CC lib/vhost/vhost_scsi.o 00:02:45.982 CC lib/vhost/vhost_blk.o 00:02:45.982 CC lib/vhost/rte_vhost_user.o 00:02:45.982 CC lib/iscsi/conn.o 00:02:45.982 CC lib/iscsi/init_grp.o 00:02:45.982 CC lib/iscsi/iscsi.o 00:02:45.982 CC lib/iscsi/param.o 00:02:45.982 CC lib/iscsi/portal_grp.o 00:02:45.982 CC lib/iscsi/tgt_node.o 00:02:45.982 CC lib/iscsi/iscsi_subsystem.o 00:02:45.982 CC lib/iscsi/iscsi_rpc.o 00:02:45.982 CC lib/iscsi/task.o 00:02:45.982 SYMLINK libspdk_ublk.so 00:02:46.243 LIB libspdk_ftl.a 00:02:46.243 SO libspdk_ftl.so.9.0 00:02:46.504 SYMLINK libspdk_ftl.so 00:02:47.075 LIB libspdk_vhost.a 00:02:47.075 SO libspdk_vhost.so.8.0 00:02:47.075 SYMLINK libspdk_vhost.so 00:02:47.075 LIB libspdk_nvmf.a 00:02:47.336 LIB libspdk_iscsi.a 00:02:47.336 SO libspdk_nvmf.so.19.0 00:02:47.336 SO libspdk_iscsi.so.8.0 00:02:47.336 SYMLINK libspdk_iscsi.so 00:02:47.604 SYMLINK libspdk_nvmf.so 00:02:48.176 CC module/env_dpdk/env_dpdk_rpc.o 00:02:48.176 CC module/vfu_device/vfu_virtio.o 00:02:48.176 CC module/vfu_device/vfu_virtio_blk.o 00:02:48.176 CC module/vfu_device/vfu_virtio_scsi.o 00:02:48.176 CC module/vfu_device/vfu_virtio_rpc.o 00:02:48.176 CC module/vfu_device/vfu_virtio_fs.o 00:02:48.176 LIB libspdk_env_dpdk_rpc.a 00:02:48.176 CC module/keyring/file/keyring_rpc.o 00:02:48.176 CC module/keyring/file/keyring.o 00:02:48.176 CC module/accel/dsa/accel_dsa.o 00:02:48.176 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.176 CC module/keyring/linux/keyring.o 00:02:48.176 CC module/scheduler/gscheduler/gscheduler.o 00:02:48.176 CC module/keyring/linux/keyring_rpc.o 00:02:48.176 CC module/sock/posix/posix.o 00:02:48.176 CC module/blob/bdev/blob_bdev.o 00:02:48.176 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:48.176 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.176 CC 
module/accel/error/accel_error.o 00:02:48.176 CC module/accel/iaa/accel_iaa.o 00:02:48.176 CC module/fsdev/aio/fsdev_aio.o 00:02:48.176 CC module/accel/error/accel_error_rpc.o 00:02:48.176 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:48.176 CC module/fsdev/aio/linux_aio_mgr.o 00:02:48.176 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:48.176 CC module/accel/ioat/accel_ioat.o 00:02:48.176 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.176 SO libspdk_env_dpdk_rpc.so.6.0 00:02:48.436 SYMLINK libspdk_env_dpdk_rpc.so 00:02:48.436 LIB libspdk_keyring_file.a 00:02:48.436 LIB libspdk_keyring_linux.a 00:02:48.436 LIB libspdk_scheduler_gscheduler.a 00:02:48.436 LIB libspdk_scheduler_dpdk_governor.a 00:02:48.436 SO libspdk_keyring_file.so.2.0 00:02:48.436 SO libspdk_keyring_linux.so.1.0 00:02:48.436 SO libspdk_scheduler_gscheduler.so.4.0 00:02:48.436 LIB libspdk_accel_error.a 00:02:48.436 LIB libspdk_accel_ioat.a 00:02:48.436 LIB libspdk_scheduler_dynamic.a 00:02:48.436 LIB libspdk_accel_iaa.a 00:02:48.436 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:48.436 SO libspdk_accel_error.so.2.0 00:02:48.436 SO libspdk_scheduler_dynamic.so.4.0 00:02:48.436 LIB libspdk_accel_dsa.a 00:02:48.436 SO libspdk_accel_ioat.so.6.0 00:02:48.436 SYMLINK libspdk_keyring_file.so 00:02:48.436 SYMLINK libspdk_keyring_linux.so 00:02:48.436 SO libspdk_accel_iaa.so.3.0 00:02:48.436 SYMLINK libspdk_scheduler_gscheduler.so 00:02:48.436 LIB libspdk_blob_bdev.a 00:02:48.436 SO libspdk_accel_dsa.so.5.0 00:02:48.696 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:48.696 SYMLINK libspdk_accel_error.so 00:02:48.696 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.696 SO libspdk_blob_bdev.so.11.0 00:02:48.696 SYMLINK libspdk_accel_ioat.so 00:02:48.696 SYMLINK libspdk_accel_iaa.so 00:02:48.696 SYMLINK libspdk_accel_dsa.so 00:02:48.696 LIB libspdk_vfu_device.a 00:02:48.696 SYMLINK libspdk_blob_bdev.so 00:02:48.696 SO libspdk_vfu_device.so.3.0 00:02:48.696 SYMLINK libspdk_vfu_device.so 00:02:48.957 LIB libspdk_fsdev_aio.a 00:02:48.957 SO libspdk_fsdev_aio.so.1.0 00:02:48.957 LIB libspdk_sock_posix.a 00:02:48.957 SO libspdk_sock_posix.so.6.0 00:02:48.957 SYMLINK libspdk_fsdev_aio.so 00:02:49.218 SYMLINK libspdk_sock_posix.so 00:02:49.218 CC module/bdev/delay/vbdev_delay.o 00:02:49.218 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:49.218 CC module/bdev/error/vbdev_error.o 00:02:49.218 CC module/bdev/error/vbdev_error_rpc.o 00:02:49.218 CC module/bdev/null/bdev_null.o 00:02:49.218 CC module/bdev/gpt/gpt.o 00:02:49.218 CC module/bdev/null/bdev_null_rpc.o 00:02:49.218 CC module/bdev/gpt/vbdev_gpt.o 00:02:49.218 CC module/bdev/passthru/vbdev_passthru.o 00:02:49.218 CC module/bdev/malloc/bdev_malloc.o 00:02:49.218 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:49.218 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:49.218 CC module/bdev/lvol/vbdev_lvol.o 00:02:49.218 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:49.218 CC module/bdev/nvme/bdev_nvme.o 00:02:49.218 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.218 CC module/blobfs/bdev/blobfs_bdev.o 00:02:49.218 CC module/bdev/aio/bdev_aio.o 00:02:49.218 CC module/bdev/nvme/nvme_rpc.o 00:02:49.218 CC module/bdev/ftl/bdev_ftl.o 00:02:49.218 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:49.218 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.218 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.218 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.218 CC module/bdev/nvme/vbdev_opal.o 00:02:49.218 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.218 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.218 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.218 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.218 CC module/bdev/split/vbdev_split.o 00:02:49.218 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.218 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.218 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.218 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.218 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.218 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.218 CC module/bdev/raid/bdev_raid.o 00:02:49.218 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.218 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.218 CC module/bdev/raid/raid0.o 00:02:49.218 CC module/bdev/raid/raid1.o 00:02:49.218 CC module/bdev/raid/concat.o 00:02:49.496 LIB libspdk_bdev_null.a 00:02:49.496 LIB libspdk_bdev_passthru.a 00:02:49.496 LIB libspdk_blobfs_bdev.a 00:02:49.758 SO libspdk_bdev_null.so.6.0 00:02:49.758 LIB libspdk_bdev_split.a 00:02:49.758 SO libspdk_bdev_passthru.so.6.0 00:02:49.758 SO libspdk_blobfs_bdev.so.6.0 00:02:49.758 LIB libspdk_bdev_aio.a 00:02:49.758 LIB libspdk_bdev_error.a 00:02:49.758 LIB libspdk_bdev_zone_block.a 00:02:49.758 LIB libspdk_bdev_delay.a 00:02:49.758 SO libspdk_bdev_split.so.6.0 00:02:49.758 LIB libspdk_bdev_gpt.a 00:02:49.758 LIB libspdk_bdev_malloc.a 00:02:49.758 SO libspdk_bdev_aio.so.6.0 00:02:49.758 SYMLINK libspdk_bdev_null.so 00:02:49.758 SO libspdk_bdev_error.so.6.0 00:02:49.758 LIB libspdk_bdev_ftl.a 00:02:49.758 SO libspdk_bdev_malloc.so.6.0 00:02:49.758 SO libspdk_bdev_zone_block.so.6.0 00:02:49.758 SYMLINK libspdk_bdev_passthru.so 00:02:49.758 SO libspdk_bdev_delay.so.6.0 00:02:49.758 SO libspdk_bdev_gpt.so.6.0 00:02:49.758 SYMLINK libspdk_blobfs_bdev.so 00:02:49.758 SO libspdk_bdev_ftl.so.6.0 00:02:49.758 SYMLINK libspdk_bdev_split.so 00:02:49.758 LIB libspdk_bdev_iscsi.a 00:02:49.758 SYMLINK libspdk_bdev_aio.so 00:02:49.758 SYMLINK libspdk_bdev_zone_block.so 00:02:49.758 SYMLINK libspdk_bdev_error.so 00:02:49.758 SYMLINK libspdk_bdev_malloc.so 00:02:49.758 SYMLINK libspdk_bdev_gpt.so 00:02:49.758 SYMLINK libspdk_bdev_delay.so 00:02:49.758 SO libspdk_bdev_iscsi.so.6.0 00:02:49.758 SYMLINK libspdk_bdev_ftl.so 00:02:49.758 LIB libspdk_bdev_lvol.a 00:02:49.758 LIB libspdk_bdev_virtio.a 00:02:49.758 SO libspdk_bdev_lvol.so.6.0 00:02:49.758 SO libspdk_bdev_virtio.so.6.0 00:02:49.758 SYMLINK libspdk_bdev_iscsi.so 00:02:50.019 SYMLINK libspdk_bdev_lvol.so 00:02:50.020 SYMLINK libspdk_bdev_virtio.so 00:02:50.281 LIB libspdk_bdev_raid.a 00:02:50.281 SO libspdk_bdev_raid.so.6.0 00:02:50.542 SYMLINK libspdk_bdev_raid.so 00:02:51.483 LIB libspdk_bdev_nvme.a 00:02:51.483 SO libspdk_bdev_nvme.so.7.0 00:02:51.744 SYMLINK libspdk_bdev_nvme.so 00:02:52.316 CC module/event/subsystems/vmd/vmd.o 00:02:52.316 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:52.316 CC module/event/subsystems/iobuf/iobuf.o 00:02:52.316 CC module/event/subsystems/sock/sock.o 00:02:52.316 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.316 CC module/event/subsystems/keyring/keyring.o 00:02:52.316 CC module/event/subsystems/scheduler/scheduler.o 00:02:52.316 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:52.316 CC module/event/subsystems/fsdev/fsdev.o 00:02:52.316 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:52.577 LIB libspdk_event_scheduler.a 00:02:52.577 LIB libspdk_event_keyring.a 00:02:52.577 LIB libspdk_event_sock.a 00:02:52.577 LIB libspdk_event_vmd.a 00:02:52.577 LIB libspdk_event_iobuf.a 00:02:52.577 LIB libspdk_event_vhost_blk.a 00:02:52.577 LIB libspdk_event_fsdev.a 00:02:52.577 LIB 
libspdk_event_vfu_tgt.a 00:02:52.577 SO libspdk_event_scheduler.so.4.0 00:02:52.577 SO libspdk_event_keyring.so.1.0 00:02:52.577 SO libspdk_event_sock.so.5.0 00:02:52.577 SO libspdk_event_vmd.so.6.0 00:02:52.577 SO libspdk_event_iobuf.so.3.0 00:02:52.577 SO libspdk_event_vhost_blk.so.3.0 00:02:52.577 SO libspdk_event_vfu_tgt.so.3.0 00:02:52.577 SO libspdk_event_fsdev.so.1.0 00:02:52.577 SYMLINK libspdk_event_keyring.so 00:02:52.577 SYMLINK libspdk_event_scheduler.so 00:02:52.577 SYMLINK libspdk_event_iobuf.so 00:02:52.577 SYMLINK libspdk_event_vfu_tgt.so 00:02:52.577 SYMLINK libspdk_event_sock.so 00:02:52.577 SYMLINK libspdk_event_vhost_blk.so 00:02:52.577 SYMLINK libspdk_event_vmd.so 00:02:52.577 SYMLINK libspdk_event_fsdev.so 00:02:53.150 CC module/event/subsystems/accel/accel.o 00:02:53.150 LIB libspdk_event_accel.a 00:02:53.150 SO libspdk_event_accel.so.6.0 00:02:53.150 SYMLINK libspdk_event_accel.so 00:02:53.723 CC module/event/subsystems/bdev/bdev.o 00:02:53.723 LIB libspdk_event_bdev.a 00:02:53.723 SO libspdk_event_bdev.so.6.0 00:02:53.985 SYMLINK libspdk_event_bdev.so 00:02:54.246 CC module/event/subsystems/scsi/scsi.o 00:02:54.246 CC module/event/subsystems/nbd/nbd.o 00:02:54.246 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:54.246 CC module/event/subsystems/ublk/ublk.o 00:02:54.246 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:54.507 LIB libspdk_event_ublk.a 00:02:54.507 LIB libspdk_event_nbd.a 00:02:54.507 LIB libspdk_event_scsi.a 00:02:54.507 SO libspdk_event_ublk.so.3.0 00:02:54.507 SO libspdk_event_nbd.so.6.0 00:02:54.507 SO libspdk_event_scsi.so.6.0 00:02:54.507 LIB libspdk_event_nvmf.a 00:02:54.507 SYMLINK libspdk_event_ublk.so 00:02:54.507 SYMLINK libspdk_event_nbd.so 00:02:54.507 SYMLINK libspdk_event_scsi.so 00:02:54.507 SO libspdk_event_nvmf.so.6.0 00:02:54.507 SYMLINK libspdk_event_nvmf.so 00:02:55.080 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:55.080 CC module/event/subsystems/iscsi/iscsi.o 00:02:55.080 LIB libspdk_event_vhost_scsi.a 00:02:55.080 SO libspdk_event_vhost_scsi.so.3.0 00:02:55.080 LIB libspdk_event_iscsi.a 00:02:55.080 SO libspdk_event_iscsi.so.6.0 00:02:55.080 SYMLINK libspdk_event_vhost_scsi.so 00:02:55.342 SYMLINK libspdk_event_iscsi.so 00:02:55.342 SO libspdk.so.6.0 00:02:55.342 SYMLINK libspdk.so 00:02:55.916 CXX app/trace/trace.o 00:02:55.916 CC app/spdk_top/spdk_top.o 00:02:55.916 CC app/trace_record/trace_record.o 00:02:55.916 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.916 CC app/spdk_nvme_identify/identify.o 00:02:55.916 CC app/spdk_lspci/spdk_lspci.o 00:02:55.916 CC test/rpc_client/rpc_client_test.o 00:02:55.916 CC app/spdk_nvme_perf/perf.o 00:02:55.916 TEST_HEADER include/spdk/accel.h 00:02:55.916 TEST_HEADER include/spdk/assert.h 00:02:55.916 TEST_HEADER include/spdk/accel_module.h 00:02:55.916 TEST_HEADER include/spdk/barrier.h 00:02:55.916 TEST_HEADER include/spdk/base64.h 00:02:55.916 TEST_HEADER include/spdk/bdev.h 00:02:55.916 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.916 TEST_HEADER include/spdk/bdev_module.h 00:02:55.916 TEST_HEADER include/spdk/bit_array.h 00:02:55.916 TEST_HEADER include/spdk/bit_pool.h 00:02:55.916 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.916 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.916 TEST_HEADER include/spdk/blobfs.h 00:02:55.916 TEST_HEADER include/spdk/blob.h 00:02:55.916 TEST_HEADER include/spdk/config.h 00:02:55.916 TEST_HEADER include/spdk/conf.h 00:02:55.916 TEST_HEADER include/spdk/cpuset.h 00:02:55.916 TEST_HEADER include/spdk/crc16.h 00:02:55.916 TEST_HEADER 
include/spdk/crc32.h 00:02:55.916 TEST_HEADER include/spdk/crc64.h 00:02:55.916 TEST_HEADER include/spdk/dif.h 00:02:55.916 TEST_HEADER include/spdk/dma.h 00:02:55.916 TEST_HEADER include/spdk/endian.h 00:02:55.916 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.916 TEST_HEADER include/spdk/env.h 00:02:55.916 CC app/spdk_dd/spdk_dd.o 00:02:55.916 TEST_HEADER include/spdk/fd_group.h 00:02:55.916 TEST_HEADER include/spdk/event.h 00:02:55.916 TEST_HEADER include/spdk/fd.h 00:02:55.916 TEST_HEADER include/spdk/file.h 00:02:55.916 TEST_HEADER include/spdk/fsdev_module.h 00:02:55.916 TEST_HEADER include/spdk/fsdev.h 00:02:55.916 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:55.916 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.916 TEST_HEADER include/spdk/ftl.h 00:02:55.916 CC app/nvmf_tgt/nvmf_main.o 00:02:55.916 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.916 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.916 TEST_HEADER include/spdk/hexlify.h 00:02:55.916 TEST_HEADER include/spdk/histogram_data.h 00:02:55.916 TEST_HEADER include/spdk/idxd.h 00:02:55.916 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.916 TEST_HEADER include/spdk/init.h 00:02:55.916 TEST_HEADER include/spdk/ioat.h 00:02:55.916 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.916 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.916 TEST_HEADER include/spdk/json.h 00:02:55.916 TEST_HEADER include/spdk/keyring.h 00:02:55.916 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.916 CC app/spdk_tgt/spdk_tgt.o 00:02:55.916 TEST_HEADER include/spdk/keyring_module.h 00:02:55.916 TEST_HEADER include/spdk/log.h 00:02:55.916 TEST_HEADER include/spdk/likely.h 00:02:55.916 TEST_HEADER include/spdk/lvol.h 00:02:55.916 TEST_HEADER include/spdk/mmio.h 00:02:55.916 TEST_HEADER include/spdk/md5.h 00:02:55.916 TEST_HEADER include/spdk/memory.h 00:02:55.916 TEST_HEADER include/spdk/nbd.h 00:02:55.916 TEST_HEADER include/spdk/net.h 00:02:55.916 TEST_HEADER include/spdk/notify.h 00:02:55.916 TEST_HEADER include/spdk/nvme.h 00:02:55.916 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.916 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.916 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.916 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.916 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.916 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.916 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.916 TEST_HEADER include/spdk/nvmf.h 00:02:55.916 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.916 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.916 TEST_HEADER include/spdk/opal.h 00:02:55.916 TEST_HEADER include/spdk/opal_spec.h 00:02:55.916 TEST_HEADER include/spdk/pci_ids.h 00:02:55.916 TEST_HEADER include/spdk/queue.h 00:02:55.916 TEST_HEADER include/spdk/pipe.h 00:02:55.916 TEST_HEADER include/spdk/reduce.h 00:02:55.916 TEST_HEADER include/spdk/rpc.h 00:02:55.916 TEST_HEADER include/spdk/scheduler.h 00:02:55.916 TEST_HEADER include/spdk/scsi.h 00:02:55.916 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.916 TEST_HEADER include/spdk/sock.h 00:02:55.916 TEST_HEADER include/spdk/stdinc.h 00:02:55.916 TEST_HEADER include/spdk/string.h 00:02:55.916 TEST_HEADER include/spdk/thread.h 00:02:55.916 TEST_HEADER include/spdk/trace.h 00:02:55.916 TEST_HEADER include/spdk/tree.h 00:02:55.916 TEST_HEADER include/spdk/trace_parser.h 00:02:55.916 TEST_HEADER include/spdk/util.h 00:02:55.916 TEST_HEADER include/spdk/ublk.h 00:02:55.916 TEST_HEADER include/spdk/uuid.h 00:02:55.916 TEST_HEADER include/spdk/version.h 00:02:55.917 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.917 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:55.917 TEST_HEADER include/spdk/vhost.h 00:02:55.917 TEST_HEADER include/spdk/xor.h 00:02:55.917 TEST_HEADER include/spdk/vmd.h 00:02:55.917 CXX test/cpp_headers/accel.o 00:02:55.917 TEST_HEADER include/spdk/zipf.h 00:02:55.917 CXX test/cpp_headers/assert.o 00:02:55.917 CXX test/cpp_headers/accel_module.o 00:02:55.917 CXX test/cpp_headers/barrier.o 00:02:55.917 CXX test/cpp_headers/bdev.o 00:02:55.917 CXX test/cpp_headers/base64.o 00:02:55.917 CXX test/cpp_headers/bdev_module.o 00:02:55.917 CXX test/cpp_headers/bit_pool.o 00:02:55.917 CXX test/cpp_headers/bit_array.o 00:02:55.917 CXX test/cpp_headers/bdev_zone.o 00:02:55.917 CXX test/cpp_headers/blobfs.o 00:02:55.917 CXX test/cpp_headers/blob_bdev.o 00:02:55.917 CXX test/cpp_headers/blobfs_bdev.o 00:02:55.917 CXX test/cpp_headers/blob.o 00:02:55.917 CXX test/cpp_headers/cpuset.o 00:02:55.917 CXX test/cpp_headers/config.o 00:02:55.917 CXX test/cpp_headers/conf.o 00:02:55.917 CXX test/cpp_headers/crc16.o 00:02:55.917 CXX test/cpp_headers/crc64.o 00:02:55.917 CXX test/cpp_headers/dma.o 00:02:55.917 CXX test/cpp_headers/crc32.o 00:02:55.917 CC examples/util/zipf/zipf.o 00:02:55.917 CC examples/ioat/verify/verify.o 00:02:55.917 CXX test/cpp_headers/endian.o 00:02:55.917 CXX test/cpp_headers/event.o 00:02:55.917 CXX test/cpp_headers/env.o 00:02:55.917 CXX test/cpp_headers/dif.o 00:02:55.917 CXX test/cpp_headers/env_dpdk.o 00:02:55.917 CXX test/cpp_headers/fd.o 00:02:55.917 CC examples/ioat/perf/perf.o 00:02:55.917 CXX test/cpp_headers/file.o 00:02:55.917 CXX test/cpp_headers/fsdev.o 00:02:55.917 CXX test/cpp_headers/fd_group.o 00:02:55.917 CXX test/cpp_headers/fsdev_module.o 00:02:55.917 CXX test/cpp_headers/fuse_dispatcher.o 00:02:55.917 CXX test/cpp_headers/gpt_spec.o 00:02:55.917 CXX test/cpp_headers/histogram_data.o 00:02:55.917 CXX test/cpp_headers/ftl.o 00:02:55.917 CXX test/cpp_headers/hexlify.o 00:02:55.917 CXX test/cpp_headers/idxd_spec.o 00:02:55.917 CXX test/cpp_headers/init.o 00:02:55.917 CXX test/cpp_headers/ioat.o 00:02:55.917 CXX test/cpp_headers/ioat_spec.o 00:02:55.917 CXX test/cpp_headers/iscsi_spec.o 00:02:55.917 CXX test/cpp_headers/jsonrpc.o 00:02:55.917 CXX test/cpp_headers/idxd.o 00:02:55.917 CXX test/cpp_headers/keyring_module.o 00:02:56.182 CC test/env/memory/memory_ut.o 00:02:56.182 CXX test/cpp_headers/log.o 00:02:56.182 CXX test/cpp_headers/json.o 00:02:56.182 CXX test/cpp_headers/md5.o 00:02:56.182 CXX test/cpp_headers/mmio.o 00:02:56.182 CXX test/cpp_headers/keyring.o 00:02:56.182 CXX test/cpp_headers/memory.o 00:02:56.182 CXX test/cpp_headers/net.o 00:02:56.182 CXX test/cpp_headers/nbd.o 00:02:56.182 CXX test/cpp_headers/nvme.o 00:02:56.182 CXX test/cpp_headers/notify.o 00:02:56.182 CXX test/cpp_headers/nvme_intel.o 00:02:56.182 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:56.182 CC test/app/jsoncat/jsoncat.o 00:02:56.182 CXX test/cpp_headers/nvme_zns.o 00:02:56.182 CXX test/cpp_headers/likely.o 00:02:56.182 CXX test/cpp_headers/nvmf_cmd.o 00:02:56.182 CC test/env/vtophys/vtophys.o 00:02:56.182 LINK spdk_lspci 00:02:56.182 CXX test/cpp_headers/nvmf.o 00:02:56.182 CXX test/cpp_headers/nvmf_transport.o 00:02:56.182 CXX test/cpp_headers/lvol.o 00:02:56.182 CXX test/cpp_headers/nvmf_spec.o 00:02:56.182 CXX test/cpp_headers/nvme_spec.o 00:02:56.182 CXX test/cpp_headers/pci_ids.o 00:02:56.182 CXX test/cpp_headers/opal_spec.o 00:02:56.182 CXX test/cpp_headers/queue.o 00:02:56.182 CXX test/cpp_headers/pipe.o 00:02:56.182 CXX test/cpp_headers/rpc.o 00:02:56.182 CXX 
test/cpp_headers/nvme_ocssd.o 00:02:56.182 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:56.182 CXX test/cpp_headers/scsi_spec.o 00:02:56.182 CXX test/cpp_headers/scheduler.o 00:02:56.182 CXX test/cpp_headers/opal.o 00:02:56.182 CXX test/cpp_headers/reduce.o 00:02:56.182 CXX test/cpp_headers/trace_parser.o 00:02:56.182 LINK spdk_nvme_discover 00:02:56.182 CXX test/cpp_headers/tree.o 00:02:56.182 CC test/dma/test_dma/test_dma.o 00:02:56.182 CXX test/cpp_headers/sock.o 00:02:56.182 CC test/app/histogram_perf/histogram_perf.o 00:02:56.182 CXX test/cpp_headers/scsi.o 00:02:56.182 CXX test/cpp_headers/thread.o 00:02:56.182 CXX test/cpp_headers/stdinc.o 00:02:56.182 CXX test/cpp_headers/string.o 00:02:56.182 CXX test/cpp_headers/trace.o 00:02:56.182 LINK nvmf_tgt 00:02:56.182 CC test/app/stub/stub.o 00:02:56.182 CXX test/cpp_headers/ublk.o 00:02:56.182 CXX test/cpp_headers/util.o 00:02:56.182 CXX test/cpp_headers/vfio_user_spec.o 00:02:56.182 CXX test/cpp_headers/uuid.o 00:02:56.182 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.182 CXX test/cpp_headers/version.o 00:02:56.182 CXX test/cpp_headers/xor.o 00:02:56.182 CXX test/cpp_headers/vfio_user_pci.o 00:02:56.182 CXX test/cpp_headers/vmd.o 00:02:56.182 CXX test/cpp_headers/vhost.o 00:02:56.182 LINK rpc_client_test 00:02:56.182 CXX test/cpp_headers/zipf.o 00:02:56.444 CC test/app/bdev_svc/bdev_svc.o 00:02:56.444 CC test/thread/poller_perf/poller_perf.o 00:02:56.444 CC test/env/pci/pci_ut.o 00:02:56.444 CC app/fio/nvme/fio_plugin.o 00:02:56.702 CC app/fio/bdev/fio_plugin.o 00:02:56.702 LINK spdk_trace 00:02:56.702 LINK verify 00:02:56.702 LINK jsoncat 00:02:56.702 LINK vtophys 00:02:56.702 LINK spdk_dd 00:02:56.962 LINK iscsi_tgt 00:02:56.962 LINK interrupt_tgt 00:02:56.962 LINK poller_perf 00:02:56.962 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:56.962 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:56.962 CC test/env/mem_callbacks/mem_callbacks.o 00:02:56.962 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:56.962 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:56.962 LINK spdk_tgt 00:02:57.222 LINK spdk_trace_record 00:02:57.223 LINK zipf 00:02:57.223 CC app/vhost/vhost.o 00:02:57.223 LINK test_dma 00:02:57.223 LINK ioat_perf 00:02:57.223 LINK spdk_nvme 00:02:57.223 LINK spdk_nvme_perf 00:02:57.223 LINK pci_ut 00:02:57.223 LINK nvme_fuzz 00:02:57.483 LINK vhost_fuzz 00:02:57.483 CC test/event/event_perf/event_perf.o 00:02:57.483 CC test/event/reactor/reactor.o 00:02:57.483 CC test/event/reactor_perf/reactor_perf.o 00:02:57.483 LINK vhost 00:02:57.483 CC test/event/app_repeat/app_repeat.o 00:02:57.483 CC test/event/scheduler/scheduler.o 00:02:57.483 LINK histogram_perf 00:02:57.483 LINK mem_callbacks 00:02:57.483 LINK env_dpdk_post_init 00:02:57.483 LINK stub 00:02:57.483 LINK bdev_svc 00:02:57.483 LINK reactor 00:02:57.743 LINK reactor_perf 00:02:57.743 LINK event_perf 00:02:57.743 CC examples/idxd/perf/perf.o 00:02:57.743 LINK app_repeat 00:02:57.743 CC examples/sock/hello_world/hello_sock.o 00:02:57.743 CC examples/vmd/lsvmd/lsvmd.o 00:02:57.743 CC examples/vmd/led/led.o 00:02:57.743 CC examples/thread/thread/thread_ex.o 00:02:57.743 LINK scheduler 00:02:58.004 CC test/nvme/aer/aer.o 00:02:58.005 CC test/nvme/overhead/overhead.o 00:02:58.005 CC test/nvme/e2edp/nvme_dp.o 00:02:58.005 CC test/nvme/reset/reset.o 00:02:58.005 CC test/nvme/startup/startup.o 00:02:58.005 LINK lsvmd 00:02:58.005 CC test/nvme/reserve/reserve.o 00:02:58.005 CC test/nvme/boot_partition/boot_partition.o 00:02:58.005 CC test/nvme/fdp/fdp.o 00:02:58.005 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:58.005 CC test/nvme/err_injection/err_injection.o 00:02:58.005 CC test/nvme/fused_ordering/fused_ordering.o 00:02:58.005 CC test/nvme/simple_copy/simple_copy.o 00:02:58.005 LINK spdk_bdev 00:02:58.005 CC test/nvme/sgl/sgl.o 00:02:58.005 LINK memory_ut 00:02:58.005 CC test/nvme/connect_stress/connect_stress.o 00:02:58.005 CC test/nvme/compliance/nvme_compliance.o 00:02:58.005 CC test/nvme/cuse/cuse.o 00:02:58.005 CC test/accel/dif/dif.o 00:02:58.005 CC test/blobfs/mkfs/mkfs.o 00:02:58.005 LINK led 00:02:58.005 LINK spdk_nvme_identify 00:02:58.005 LINK hello_sock 00:02:58.005 LINK spdk_top 00:02:58.005 LINK thread 00:02:58.005 CC test/lvol/esnap/esnap.o 00:02:58.005 LINK idxd_perf 00:02:58.005 LINK boot_partition 00:02:58.005 LINK startup 00:02:58.266 LINK doorbell_aers 00:02:58.266 LINK reserve 00:02:58.266 LINK connect_stress 00:02:58.266 LINK fused_ordering 00:02:58.266 LINK err_injection 00:02:58.266 LINK mkfs 00:02:58.266 LINK reset 00:02:58.266 LINK aer 00:02:58.266 LINK simple_copy 00:02:58.266 LINK sgl 00:02:58.266 LINK nvme_dp 00:02:58.266 LINK overhead 00:02:58.266 LINK fdp 00:02:58.266 LINK nvme_compliance 00:02:58.527 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:58.527 CC examples/nvme/arbitration/arbitration.o 00:02:58.527 CC examples/nvme/reconnect/reconnect.o 00:02:58.527 CC examples/nvme/hello_world/hello_world.o 00:02:58.527 CC examples/nvme/abort/abort.o 00:02:58.527 CC examples/nvme/hotplug/hotplug.o 00:02:58.527 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:58.527 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:58.527 LINK iscsi_fuzz 00:02:58.527 LINK dif 00:02:58.527 CC examples/accel/perf/accel_perf.o 00:02:58.527 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:58.787 CC examples/blob/hello_world/hello_blob.o 00:02:58.787 CC examples/blob/cli/blobcli.o 00:02:58.787 LINK cmb_copy 00:02:58.787 LINK pmr_persistence 00:02:58.787 LINK hello_world 00:02:58.787 LINK hotplug 00:02:58.787 LINK arbitration 00:02:58.787 LINK abort 00:02:58.787 LINK reconnect 00:02:59.048 LINK hello_fsdev 00:02:59.048 LINK hello_blob 00:02:59.048 LINK nvme_manage 00:02:59.048 LINK accel_perf 00:02:59.048 LINK cuse 00:02:59.048 LINK blobcli 00:02:59.309 CC test/bdev/bdevio/bdevio.o 00:02:59.570 LINK bdevio 00:02:59.570 CC examples/bdev/hello_world/hello_bdev.o 00:02:59.831 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.092 LINK hello_bdev 00:03:00.353 LINK bdevperf 00:03:00.925 CC examples/nvmf/nvmf/nvmf.o 00:03:01.498 LINK nvmf 00:03:02.438 LINK esnap 00:03:02.699 00:03:02.699 real 0m55.878s 00:03:02.699 user 8m4.342s 00:03:02.699 sys 5m28.337s 00:03:02.699 11:36:47 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:02.699 11:36:47 make -- common/autotest_common.sh@10 -- $ set +x 00:03:02.699 ************************************ 00:03:02.699 END TEST make 00:03:02.699 ************************************ 00:03:02.699 11:36:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:02.699 11:36:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:02.699 11:36:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:02.699 11:36:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.699 11:36:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:02.699 11:36:47 -- pm/common@44 -- $ pid=693904 00:03:02.699 11:36:47 -- pm/common@50 -- $ kill -TERM 693904 00:03:02.699 11:36:47 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:02.699 11:36:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:02.699 11:36:47 -- pm/common@44 -- $ pid=693905 00:03:02.699 11:36:47 -- pm/common@50 -- $ kill -TERM 693905 00:03:02.699 11:36:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.699 11:36:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:02.699 11:36:47 -- pm/common@44 -- $ pid=693907 00:03:02.699 11:36:47 -- pm/common@50 -- $ kill -TERM 693907 00:03:02.699 11:36:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.699 11:36:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:02.699 11:36:47 -- pm/common@44 -- $ pid=693931 00:03:02.699 11:36:47 -- pm/common@50 -- $ sudo -E kill -TERM 693931 00:03:02.961 11:36:47 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:02.961 11:36:47 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:02.961 11:36:47 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:02.961 11:36:47 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:02.961 11:36:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:02.961 11:36:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:02.961 11:36:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:02.961 11:36:47 -- scripts/common.sh@336 -- # IFS=.-: 00:03:02.961 11:36:47 -- scripts/common.sh@336 -- # read -ra ver1 00:03:02.961 11:36:47 -- scripts/common.sh@337 -- # IFS=.-: 00:03:02.961 11:36:47 -- scripts/common.sh@337 -- # read -ra ver2 00:03:02.961 11:36:47 -- scripts/common.sh@338 -- # local 'op=<' 00:03:02.961 11:36:47 -- scripts/common.sh@340 -- # ver1_l=2 00:03:02.961 11:36:47 -- scripts/common.sh@341 -- # ver2_l=1 00:03:02.961 11:36:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:02.961 11:36:47 -- scripts/common.sh@344 -- # case "$op" in 00:03:02.961 11:36:47 -- scripts/common.sh@345 -- # : 1 00:03:02.961 11:36:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:02.961 11:36:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:02.961 11:36:47 -- scripts/common.sh@365 -- # decimal 1 00:03:02.961 11:36:47 -- scripts/common.sh@353 -- # local d=1 00:03:02.961 11:36:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:02.961 11:36:47 -- scripts/common.sh@355 -- # echo 1 00:03:02.961 11:36:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:02.961 11:36:47 -- scripts/common.sh@366 -- # decimal 2 00:03:02.961 11:36:47 -- scripts/common.sh@353 -- # local d=2 00:03:02.961 11:36:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:02.961 11:36:47 -- scripts/common.sh@355 -- # echo 2 00:03:02.961 11:36:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:02.961 11:36:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:02.961 11:36:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:02.961 11:36:47 -- scripts/common.sh@368 -- # return 0 00:03:02.961 11:36:47 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:02.961 11:36:47 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:02.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.961 --rc genhtml_branch_coverage=1 00:03:02.961 --rc genhtml_function_coverage=1 00:03:02.961 --rc genhtml_legend=1 00:03:02.961 --rc geninfo_all_blocks=1 00:03:02.961 --rc geninfo_unexecuted_blocks=1 00:03:02.961 00:03:02.961 ' 00:03:02.961 11:36:47 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:02.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.961 --rc genhtml_branch_coverage=1 00:03:02.961 --rc genhtml_function_coverage=1 00:03:02.961 --rc genhtml_legend=1 00:03:02.961 --rc geninfo_all_blocks=1 00:03:02.961 --rc geninfo_unexecuted_blocks=1 00:03:02.961 00:03:02.961 ' 00:03:02.961 11:36:47 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:02.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.961 --rc genhtml_branch_coverage=1 00:03:02.961 --rc genhtml_function_coverage=1 00:03:02.961 --rc genhtml_legend=1 00:03:02.961 --rc geninfo_all_blocks=1 00:03:02.961 --rc geninfo_unexecuted_blocks=1 00:03:02.961 00:03:02.961 ' 00:03:02.961 11:36:47 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:02.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:02.961 --rc genhtml_branch_coverage=1 00:03:02.961 --rc genhtml_function_coverage=1 00:03:02.961 --rc genhtml_legend=1 00:03:02.961 --rc geninfo_all_blocks=1 00:03:02.961 --rc geninfo_unexecuted_blocks=1 00:03:02.961 00:03:02.961 ' 00:03:02.961 11:36:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:02.961 11:36:47 -- nvmf/common.sh@7 -- # uname -s 00:03:02.961 11:36:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:02.961 11:36:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:02.961 11:36:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:02.961 11:36:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:02.961 11:36:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:02.961 11:36:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:02.961 11:36:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:02.961 11:36:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:02.961 11:36:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:02.961 11:36:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:02.961 11:36:47 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:02.961 11:36:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:02.961 11:36:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:02.961 11:36:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:02.961 11:36:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:02.961 11:36:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:02.961 11:36:47 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:02.961 11:36:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:02.961 11:36:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:02.961 11:36:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.961 11:36:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.961 11:36:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.961 11:36:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.961 11:36:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.961 11:36:47 -- paths/export.sh@5 -- # export PATH 00:03:02.961 11:36:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.961 11:36:47 -- nvmf/common.sh@51 -- # : 0 00:03:02.961 11:36:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:02.961 11:36:47 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:02.961 11:36:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:02.961 11:36:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:02.961 11:36:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:02.961 11:36:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:02.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:02.961 11:36:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:02.961 11:36:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:02.961 11:36:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:02.961 11:36:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:02.961 11:36:47 -- spdk/autotest.sh@32 -- # uname -s 00:03:02.961 11:36:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:02.961 11:36:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:02.961 11:36:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
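The trace above records a genuine failure inside nvmf/common.sh: the test `'[' '' -eq 1 ']'` aborts with "integer expression expected" because an empty string reaches a numeric comparison. Below is a minimal sketch of that failure mode and the usual default-expansion guard; it assumes nothing beyond standard bash, and the variable name `flag` is illustrative rather than the one the script uses.

```bash
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" error seen above:
# test(1) requires an integer operand, and an empty string is not one.
flag=''
[ "$flag" -eq 1 ] && echo "enabled"    # prints the error, exits non-zero

# A common guard: substitute a numeric default before comparing.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"                    # taken here, with no error raised
fi
```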
00:03:02.961 11:36:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:02.961 11:36:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:02.961 11:36:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:02.961 11:36:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:02.961 11:36:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:02.961 11:36:47 -- spdk/autotest.sh@48 -- # udevadm_pid=760086 00:03:02.961 11:36:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:02.961 11:36:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:02.961 11:36:47 -- pm/common@17 -- # local monitor 00:03:02.961 11:36:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.961 11:36:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.961 11:36:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.961 11:36:47 -- pm/common@21 -- # date +%s 00:03:02.961 11:36:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.961 11:36:47 -- pm/common@25 -- # sleep 1 00:03:02.961 11:36:47 -- pm/common@21 -- # date +%s 00:03:02.961 11:36:47 -- pm/common@21 -- # date +%s 00:03:02.961 11:36:47 -- pm/common@21 -- # date +%s 00:03:02.961 11:36:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728639407 00:03:02.961 11:36:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728639407 00:03:02.961 11:36:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728639407 00:03:02.961 11:36:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728639407 00:03:03.223 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728639407_collect-cpu-load.pm.log 00:03:03.223 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728639407_collect-vmstat.pm.log 00:03:03.223 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728639407_collect-cpu-temp.pm.log 00:03:03.223 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728639407_collect-bmc-pm.bmc.pm.log 00:03:04.168 11:36:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:04.168 11:36:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:04.168 11:36:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:04.168 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:03:04.168 11:36:48 -- spdk/autotest.sh@59 -- # create_test_list 00:03:04.168 11:36:48 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:04.168 11:36:48 -- common/autotest_common.sh@10 -- # set +x 00:03:04.168 11:36:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:04.168 11:36:48 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:04.168 11:36:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:04.168 11:36:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:04.168 11:36:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:04.168 11:36:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:04.168 11:36:48 -- common/autotest_common.sh@1455 -- # uname 00:03:04.168 11:36:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:04.168 11:36:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:04.168 11:36:48 -- common/autotest_common.sh@1475 -- # uname 00:03:04.168 11:36:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:04.168 11:36:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:04.168 11:36:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:04.168 lcov: LCOV version 1.15 00:03:04.168 11:36:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:19.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:19.085 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:37.215 11:37:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:37.215 11:37:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:37.215 11:37:18 -- common/autotest_common.sh@10 -- # set +x 00:03:37.215 11:37:18 -- spdk/autotest.sh@78 -- # rm -f 00:03:37.215 11:37:18 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.787 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:37.787 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:37.787 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:38.048 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:38.048 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:38.048 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:38.048 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:38.048 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:38.048 11:37:22 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:38.048 11:37:22 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:38.048 11:37:22 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:38.048 11:37:22 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:38.048 11:37:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:38.048 11:37:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:38.048 11:37:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:38.048 11:37:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.048 11:37:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:38.048 11:37:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:38.048 11:37:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:38.048 11:37:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:38.048 11:37:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:38.048 11:37:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:38.048 11:37:22 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:38.048 No valid GPT data, bailing 00:03:38.048 11:37:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.048 11:37:22 -- scripts/common.sh@394 -- # pt= 00:03:38.048 11:37:22 -- scripts/common.sh@395 -- # return 1 00:03:38.048 11:37:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:38.048 1+0 records in 00:03:38.048 1+0 records out 00:03:38.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044531 s, 235 MB/s 00:03:38.048 11:37:22 -- spdk/autotest.sh@105 -- # sync 00:03:38.048 11:37:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:38.048 11:37:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:38.048 11:37:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:48.089 11:37:31 -- spdk/autotest.sh@111 -- # uname -s 00:03:48.089 11:37:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:48.089 11:37:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:48.089 11:37:31 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:50.005 Hugepages 00:03:50.005 node hugesize free / total 00:03:50.005 node0 1048576kB 0 / 0 00:03:50.005 node0 2048kB 0 / 0 00:03:50.005 node1 1048576kB 0 / 0 00:03:50.005 node1 2048kB 0 / 0 00:03:50.005 00:03:50.005 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.266 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:50.266 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:50.266 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:50.266 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:50.266 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:50.266 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:50.266 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:50.266 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:50.266 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:50.266 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:50.266 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:50.266 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:50.266 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:50.266 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:50.266 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:50.266 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:50.266 I/OAT 0000:80:01.7 8086 0b00 
1 ioatdma - - 00:03:50.266 11:37:34 -- spdk/autotest.sh@117 -- # uname -s 00:03:50.266 11:37:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:50.266 11:37:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:50.266 11:37:34 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.476 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:54.476 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:55.862 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:55.862 11:37:40 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:56.804 11:37:41 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:56.804 11:37:41 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:56.804 11:37:41 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:56.804 11:37:41 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:56.804 11:37:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:56.804 11:37:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:56.804 11:37:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.804 11:37:41 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.804 11:37:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:56.804 11:37:41 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:56.804 11:37:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:56.804 11:37:41 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.107 Waiting for block devices as requested 00:04:00.107 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:00.368 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:00.368 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:00.368 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:00.628 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:00.628 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:00.628 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:00.890 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:00.890 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:00.890 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:01.151 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:01.151 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:01.151 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:01.418 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:01.418 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:01.418 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:01.681 0000:00:01.1 (8086 0b00): vfio-pci 
-> ioatdma 00:04:01.681 11:37:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:01.681 11:37:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:01.681 11:37:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:01.681 11:37:46 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:04:01.681 11:37:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:01.681 11:37:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:01.681 11:37:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:01.681 11:37:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:01.681 11:37:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:01.681 11:37:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:01.681 11:37:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:01.681 11:37:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:01.681 11:37:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:01.681 11:37:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:04:01.681 11:37:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:01.681 11:37:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:01.681 11:37:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:01.681 11:37:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:01.681 11:37:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:01.681 11:37:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:01.681 11:37:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:01.681 11:37:46 -- common/autotest_common.sh@1541 -- # continue 00:04:01.681 11:37:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:01.681 11:37:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.681 11:37:46 -- common/autotest_common.sh@10 -- # set +x 00:04:01.681 11:37:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:01.681 11:37:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.681 11:37:46 -- common/autotest_common.sh@10 -- # set +x 00:04:01.681 11:37:46 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.889 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:05.889 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:05.889 11:37:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
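The get_nvme_ctrlr_from_bdf trace above resolves `/sys/class/nvme/nvme0` back to its PCI address, then greps `oacs` out of `nvme id-ctrl` to decide whether namespace management is available. A rough standalone sketch of the same lookup follows; it assumes nvme-cli is installed, and the BDF 0000:65:00.0 is simply the one this rig reports.

```bash
#!/usr/bin/env bash
# Sketch of the controller lookup traced above (assumes nvme-cli).
bdf="0000:65:00.0"
ctrlr=""
for link in /sys/class/nvme/nvme*; do
    path=$(readlink -f "$link")               # .../0000:65:00.0/nvme/nvme0
    [[ $path == *"/$bdf/nvme/"* ]] && ctrlr="/dev/$(basename "$path")"
done
echo "controller: $ctrlr"                     # /dev/nvme0 on this box

# As in the log, OACS bit 3 (0x8) advertises namespace management:
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0x5f' above
echo "ns-manage: $(( oacs & 0x8 ))"                       # 8, i.e. supported
```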
00:04:05.889 11:37:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.889 11:37:49 -- common/autotest_common.sh@10 -- # set +x 00:04:05.889 11:37:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:05.889 11:37:49 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:05.889 11:37:49 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.889 11:37:49 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:05.889 11:37:49 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:05.889 11:37:49 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:05.889 11:37:49 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:05.889 11:37:49 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:05.889 11:37:49 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:05.889 11:37:49 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:05.889 11:37:49 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.889 11:37:49 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:05.889 11:37:49 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:05.889 11:37:50 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:05.889 11:37:50 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:05.889 11:37:50 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:05.889 11:37:50 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:05.889 11:37:50 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:05.889 11:37:50 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:05.889 11:37:50 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:05.889 11:37:50 -- common/autotest_common.sh@1570 -- # return 0 00:04:05.889 11:37:50 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:05.889 11:37:50 -- common/autotest_common.sh@1578 -- # return 0 00:04:05.889 11:37:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:05.889 11:37:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:05.889 11:37:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.889 11:37:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:05.889 11:37:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:05.889 11:37:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:05.889 11:37:50 -- common/autotest_common.sh@10 -- # set +x 00:04:05.889 11:37:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:05.889 11:37:50 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:05.889 11:37:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.889 11:37:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.889 11:37:50 -- common/autotest_common.sh@10 -- # set +x 00:04:05.889 ************************************ 00:04:05.889 START TEST env 00:04:05.889 ************************************ 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:05.889 * Looking for test storage... 
00:04:05.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.889 11:37:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.889 11:37:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.889 11:37:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.889 11:37:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.889 11:37:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.889 11:37:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.889 11:37:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.889 11:37:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.889 11:37:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.889 11:37:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.889 11:37:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.889 11:37:50 env -- scripts/common.sh@344 -- # case "$op" in 00:04:05.889 11:37:50 env -- scripts/common.sh@345 -- # : 1 00:04:05.889 11:37:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.889 11:37:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.889 11:37:50 env -- scripts/common.sh@365 -- # decimal 1 00:04:05.889 11:37:50 env -- scripts/common.sh@353 -- # local d=1 00:04:05.889 11:37:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.889 11:37:50 env -- scripts/common.sh@355 -- # echo 1 00:04:05.889 11:37:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.889 11:37:50 env -- scripts/common.sh@366 -- # decimal 2 00:04:05.889 11:37:50 env -- scripts/common.sh@353 -- # local d=2 00:04:05.889 11:37:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.889 11:37:50 env -- scripts/common.sh@355 -- # echo 2 00:04:05.889 11:37:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.889 11:37:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.889 11:37:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.889 11:37:50 env -- scripts/common.sh@368 -- # return 0 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.889 --rc genhtml_branch_coverage=1 00:04:05.889 --rc genhtml_function_coverage=1 00:04:05.889 --rc genhtml_legend=1 00:04:05.889 --rc geninfo_all_blocks=1 00:04:05.889 --rc geninfo_unexecuted_blocks=1 00:04:05.889 00:04:05.889 ' 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.889 --rc genhtml_branch_coverage=1 00:04:05.889 --rc genhtml_function_coverage=1 00:04:05.889 --rc genhtml_legend=1 00:04:05.889 --rc geninfo_all_blocks=1 00:04:05.889 --rc geninfo_unexecuted_blocks=1 00:04:05.889 00:04:05.889 ' 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.889 --rc genhtml_branch_coverage=1 00:04:05.889 --rc genhtml_function_coverage=1 
00:04:05.889 --rc genhtml_legend=1 00:04:05.889 --rc geninfo_all_blocks=1 00:04:05.889 --rc geninfo_unexecuted_blocks=1 00:04:05.889 00:04:05.889 ' 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.889 --rc genhtml_branch_coverage=1 00:04:05.889 --rc genhtml_function_coverage=1 00:04:05.889 --rc genhtml_legend=1 00:04:05.889 --rc geninfo_all_blocks=1 00:04:05.889 --rc geninfo_unexecuted_blocks=1 00:04:05.889 00:04:05.889 ' 00:04:05.889 11:37:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.889 11:37:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.889 11:37:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.889 ************************************ 00:04:05.889 START TEST env_memory 00:04:05.889 ************************************ 00:04:05.889 11:37:50 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:05.889 00:04:05.889 00:04:05.889 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.890 http://cunit.sourceforge.net/ 00:04:05.890 00:04:05.890 00:04:05.890 Suite: memory 00:04:05.890 Test: alloc and free memory map ...[2024-10-11 11:37:50.378542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:05.890 passed 00:04:05.890 Test: mem map translation ...[2024-10-11 11:37:50.404036] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:05.890 [2024-10-11 11:37:50.404065] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:05.890 [2024-10-11 11:37:50.404113] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:05.890 [2024-10-11 11:37:50.404120] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:05.890 passed 00:04:05.890 Test: mem map registration ...[2024-10-11 11:37:50.459373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:05.890 [2024-10-11 11:37:50.459395] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:05.890 passed 00:04:06.152 Test: mem map adjacent registrations ...passed 00:04:06.152 00:04:06.152 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.152 suites 1 1 n/a 0 0 00:04:06.152 tests 4 4 4 0 0 00:04:06.152 asserts 152 152 152 0 n/a 00:04:06.152 00:04:06.152 Elapsed time = 0.192 seconds 00:04:06.152 00:04:06.152 real 0m0.207s 00:04:06.152 user 0m0.194s 00:04:06.152 sys 0m0.012s 00:04:06.152 11:37:50 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.152 11:37:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:06.152 ************************************ 00:04:06.152 END TEST env_memory 00:04:06.152 ************************************ 00:04:06.152 11:37:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:06.152 11:37:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.152 11:37:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.152 11:37:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.152 ************************************ 00:04:06.152 START TEST env_vtophys 00:04:06.152 ************************************ 00:04:06.152 11:37:50 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:06.152 EAL: lib.eal log level changed from notice to debug 00:04:06.152 EAL: Detected lcore 0 as core 0 on socket 0 00:04:06.152 EAL: Detected lcore 1 as core 1 on socket 0 00:04:06.152 EAL: Detected lcore 2 as core 2 on socket 0 00:04:06.152 EAL: Detected lcore 3 as core 3 on socket 0 00:04:06.152 EAL: Detected lcore 4 as core 4 on socket 0 00:04:06.152 EAL: Detected lcore 5 as core 5 on socket 0 00:04:06.152 EAL: Detected lcore 6 as core 6 on socket 0 00:04:06.152 EAL: Detected lcore 7 as core 7 on socket 0 00:04:06.152 EAL: Detected lcore 8 as core 8 on socket 0 00:04:06.152 EAL: Detected lcore 9 as core 9 on socket 0 00:04:06.152 EAL: Detected lcore 10 as core 10 on socket 0 00:04:06.152 EAL: Detected lcore 11 as core 11 on socket 0 00:04:06.152 EAL: Detected lcore 12 as core 12 on socket 0 00:04:06.152 EAL: Detected lcore 13 as core 13 on socket 0 00:04:06.152 EAL: Detected lcore 14 as core 14 on socket 0 00:04:06.152 EAL: Detected lcore 15 as core 15 on socket 0 00:04:06.152 EAL: Detected lcore 16 as core 16 on socket 0 00:04:06.152 EAL: Detected lcore 17 as core 17 on socket 0 00:04:06.152 EAL: Detected lcore 18 as core 18 on socket 0 00:04:06.152 EAL: Detected lcore 19 as core 19 on socket 0 00:04:06.152 EAL: Detected lcore 20 as core 20 on socket 0 00:04:06.152 EAL: Detected lcore 21 as core 21 on socket 0 00:04:06.152 EAL: Detected lcore 22 as core 22 on socket 0 00:04:06.152 EAL: Detected lcore 23 as core 23 on socket 0 00:04:06.152 EAL: Detected lcore 24 as core 24 on socket 0 00:04:06.152 EAL: Detected lcore 25 as core 25 on socket 0 00:04:06.152 EAL: Detected lcore 26 as core 26 on socket 0 00:04:06.152 EAL: Detected lcore 27 as core 27 on socket 0 00:04:06.152 EAL: Detected lcore 28 as core 28 on socket 0 00:04:06.152 EAL: Detected lcore 29 as core 29 on socket 0 00:04:06.152 EAL: Detected lcore 30 as core 30 on socket 0 00:04:06.152 EAL: Detected lcore 31 as core 31 on socket 0 00:04:06.152 EAL: Detected lcore 32 as core 32 on socket 0 00:04:06.152 EAL: Detected lcore 33 as core 33 on socket 0 00:04:06.152 EAL: Detected lcore 34 as core 34 on socket 0 00:04:06.152 EAL: Detected lcore 35 as core 35 on socket 0 00:04:06.152 EAL: Detected lcore 36 as core 0 on socket 1 00:04:06.152 EAL: Detected lcore 37 as core 1 on socket 1 00:04:06.152 EAL: Detected lcore 38 as core 2 on socket 1 00:04:06.152 EAL: Detected lcore 39 as core 3 on socket 1 00:04:06.152 EAL: Detected lcore 40 as core 4 on socket 1 00:04:06.152 EAL: Detected lcore 41 as core 5 on socket 1 00:04:06.152 EAL: Detected lcore 42 as core 6 on socket 1 00:04:06.152 EAL: Detected lcore 43 as core 7 on socket 1 00:04:06.152 EAL: Detected lcore 44 as core 8 on socket 1 00:04:06.152 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:06.152 EAL: Detected lcore 46 as core 10 on socket 1 00:04:06.152 EAL: Detected lcore 47 as core 11 on socket 1 00:04:06.152 EAL: Detected lcore 48 as core 12 on socket 1 00:04:06.152 EAL: Detected lcore 49 as core 13 on socket 1 00:04:06.152 EAL: Detected lcore 50 as core 14 on socket 1 00:04:06.152 EAL: Detected lcore 51 as core 15 on socket 1 00:04:06.152 EAL: Detected lcore 52 as core 16 on socket 1 00:04:06.152 EAL: Detected lcore 53 as core 17 on socket 1 00:04:06.152 EAL: Detected lcore 54 as core 18 on socket 1 00:04:06.152 EAL: Detected lcore 55 as core 19 on socket 1 00:04:06.153 EAL: Detected lcore 56 as core 20 on socket 1 00:04:06.153 EAL: Detected lcore 57 as core 21 on socket 1 00:04:06.153 EAL: Detected lcore 58 as core 22 on socket 1 00:04:06.153 EAL: Detected lcore 59 as core 23 on socket 1 00:04:06.153 EAL: Detected lcore 60 as core 24 on socket 1 00:04:06.153 EAL: Detected lcore 61 as core 25 on socket 1 00:04:06.153 EAL: Detected lcore 62 as core 26 on socket 1 00:04:06.153 EAL: Detected lcore 63 as core 27 on socket 1 00:04:06.153 EAL: Detected lcore 64 as core 28 on socket 1 00:04:06.153 EAL: Detected lcore 65 as core 29 on socket 1 00:04:06.153 EAL: Detected lcore 66 as core 30 on socket 1 00:04:06.153 EAL: Detected lcore 67 as core 31 on socket 1 00:04:06.153 EAL: Detected lcore 68 as core 32 on socket 1 00:04:06.153 EAL: Detected lcore 69 as core 33 on socket 1 00:04:06.153 EAL: Detected lcore 70 as core 34 on socket 1 00:04:06.153 EAL: Detected lcore 71 as core 35 on socket 1 00:04:06.153 EAL: Detected lcore 72 as core 0 on socket 0 00:04:06.153 EAL: Detected lcore 73 as core 1 on socket 0 00:04:06.153 EAL: Detected lcore 74 as core 2 on socket 0 00:04:06.153 EAL: Detected lcore 75 as core 3 on socket 0 00:04:06.153 EAL: Detected lcore 76 as core 4 on socket 0 00:04:06.153 EAL: Detected lcore 77 as core 5 on socket 0 00:04:06.153 EAL: Detected lcore 78 as core 6 on socket 0 00:04:06.153 EAL: Detected lcore 79 as core 7 on socket 0 00:04:06.153 EAL: Detected lcore 80 as core 8 on socket 0 00:04:06.153 EAL: Detected lcore 81 as core 9 on socket 0 00:04:06.153 EAL: Detected lcore 82 as core 10 on socket 0 00:04:06.153 EAL: Detected lcore 83 as core 11 on socket 0 00:04:06.153 EAL: Detected lcore 84 as core 12 on socket 0 00:04:06.153 EAL: Detected lcore 85 as core 13 on socket 0 00:04:06.153 EAL: Detected lcore 86 as core 14 on socket 0 00:04:06.153 EAL: Detected lcore 87 as core 15 on socket 0 00:04:06.153 EAL: Detected lcore 88 as core 16 on socket 0 00:04:06.153 EAL: Detected lcore 89 as core 17 on socket 0 00:04:06.153 EAL: Detected lcore 90 as core 18 on socket 0 00:04:06.153 EAL: Detected lcore 91 as core 19 on socket 0 00:04:06.153 EAL: Detected lcore 92 as core 20 on socket 0 00:04:06.153 EAL: Detected lcore 93 as core 21 on socket 0 00:04:06.153 EAL: Detected lcore 94 as core 22 on socket 0 00:04:06.153 EAL: Detected lcore 95 as core 23 on socket 0 00:04:06.153 EAL: Detected lcore 96 as core 24 on socket 0 00:04:06.153 EAL: Detected lcore 97 as core 25 on socket 0 00:04:06.153 EAL: Detected lcore 98 as core 26 on socket 0 00:04:06.153 EAL: Detected lcore 99 as core 27 on socket 0 00:04:06.153 EAL: Detected lcore 100 as core 28 on socket 0 00:04:06.153 EAL: Detected lcore 101 as core 29 on socket 0 00:04:06.153 EAL: Detected lcore 102 as core 30 on socket 0 00:04:06.153 EAL: Detected lcore 103 as core 31 on socket 0 00:04:06.153 EAL: Detected lcore 104 as core 32 on socket 0 00:04:06.153 EAL: Detected lcore 105 as core 33 on socket 0 00:04:06.153 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:06.153 EAL: Detected lcore 107 as core 35 on socket 0 00:04:06.153 EAL: Detected lcore 108 as core 0 on socket 1 00:04:06.153 EAL: Detected lcore 109 as core 1 on socket 1 00:04:06.153 EAL: Detected lcore 110 as core 2 on socket 1 00:04:06.153 EAL: Detected lcore 111 as core 3 on socket 1 00:04:06.153 EAL: Detected lcore 112 as core 4 on socket 1 00:04:06.153 EAL: Detected lcore 113 as core 5 on socket 1 00:04:06.153 EAL: Detected lcore 114 as core 6 on socket 1 00:04:06.153 EAL: Detected lcore 115 as core 7 on socket 1 00:04:06.153 EAL: Detected lcore 116 as core 8 on socket 1 00:04:06.153 EAL: Detected lcore 117 as core 9 on socket 1 00:04:06.153 EAL: Detected lcore 118 as core 10 on socket 1 00:04:06.153 EAL: Detected lcore 119 as core 11 on socket 1 00:04:06.153 EAL: Detected lcore 120 as core 12 on socket 1 00:04:06.153 EAL: Detected lcore 121 as core 13 on socket 1 00:04:06.153 EAL: Detected lcore 122 as core 14 on socket 1 00:04:06.153 EAL: Detected lcore 123 as core 15 on socket 1 00:04:06.153 EAL: Detected lcore 124 as core 16 on socket 1 00:04:06.153 EAL: Detected lcore 125 as core 17 on socket 1 00:04:06.153 EAL: Detected lcore 126 as core 18 on socket 1 00:04:06.153 EAL: Detected lcore 127 as core 19 on socket 1 00:04:06.153 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:06.153 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:06.153 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:06.153 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:06.153 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:06.153 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:06.153 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:06.153 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:06.153 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:06.153 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:06.153 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:06.153 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:06.153 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:06.153 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:06.153 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:06.153 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:06.153 EAL: Maximum logical cores by configuration: 128 00:04:06.153 EAL: Detected CPU lcores: 128 00:04:06.153 EAL: Detected NUMA nodes: 2 00:04:06.153 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:06.153 EAL: Detected shared linkage of DPDK 00:04:06.153 EAL: No shared files mode enabled, IPC will be disabled 00:04:06.153 EAL: Bus pci wants IOVA as 'DC' 00:04:06.153 EAL: Buses did not request a specific IOVA mode. 00:04:06.153 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:06.153 EAL: Selected IOVA mode 'VA' 00:04:06.153 EAL: Probing VFIO support... 00:04:06.153 EAL: IOMMU type 1 (Type 1) is supported 00:04:06.153 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:06.153 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:06.153 EAL: VFIO support initialized 00:04:06.153 EAL: Ask a virtual area of 0x2e000 bytes 00:04:06.153 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:06.153 EAL: Setting up physically contiguous memory... 
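The long lcore inventory above comes from the Linux CPU topology files that DPDK's EAL consults at startup. The sketch below reproduces the same lcore/core/socket mapping straight from sysfs; it is purely illustrative and needs no DPDK, and offline CPUs (which lack a topology directory) are ignored.

```bash
#!/usr/bin/env bash
# Rebuild EAL's "Detected lcore N as core M on socket S" lines from the
# sysfs topology data it reads on Linux (illustrative sketch only).
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -d "$cpu/topology" ] || continue         # skip offline CPUs
    lcore=${cpu##*cpu}
    core=$(cat "$cpu/topology/core_id")
    socket=$(cat "$cpu/topology/physical_package_id")
    echo "Detected lcore $lcore as core $core on socket $socket"
done
```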
00:04:06.153 EAL: Setting maximum number of open files to 524288 00:04:06.153 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:06.153 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:06.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:06.153 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.153 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:06.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.153 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.153 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:06.153 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:06.153 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.153 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:06.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.153 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.153 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:06.153 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:06.153 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.153 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:06.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.153 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.153 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:06.153 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:06.153 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.153 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:06.153 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.153 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.153 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:06.153 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:06.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:06.153 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.153 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:06.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.153 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.153 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:06.153 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:06.153 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.153 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:06.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.153 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.153 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:06.153 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:06.153 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.153 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:06.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.153 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.153 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:06.153 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:06.153 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.153 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:06.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.153 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.153 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:06.153 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:06.154 EAL: Hugepages will be freed exactly as allocated. 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: TSC frequency is ~2400000 KHz 00:04:06.154 EAL: Main lcore 0 is ready (tid=7efea85f6a00;cpuset=[0]) 00:04:06.154 EAL: Trying to obtain current memory policy. 00:04:06.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.154 EAL: Restoring previous memory policy: 0 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was expanded by 2MB 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Mem event callback 'spdk:(nil)' registered 00:04:06.154 00:04:06.154 00:04:06.154 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.154 http://cunit.sourceforge.net/ 00:04:06.154 00:04:06.154 00:04:06.154 Suite: components_suite 00:04:06.154 Test: vtophys_malloc_test ...passed 00:04:06.154 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:06.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.154 EAL: Restoring previous memory policy: 4 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was expanded by 4MB 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was shrunk by 4MB 00:04:06.154 EAL: Trying to obtain current memory policy. 00:04:06.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.154 EAL: Restoring previous memory policy: 4 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was expanded by 6MB 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was shrunk by 6MB 00:04:06.154 EAL: Trying to obtain current memory policy. 00:04:06.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.154 EAL: Restoring previous memory policy: 4 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was expanded by 10MB 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was shrunk by 10MB 00:04:06.154 EAL: Trying to obtain current memory policy. 
00:04:06.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.154 EAL: Restoring previous memory policy: 4 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was expanded by 18MB 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was shrunk by 18MB 00:04:06.154 EAL: Trying to obtain current memory policy. 00:04:06.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.154 EAL: Restoring previous memory policy: 4 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was expanded by 34MB 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was shrunk by 34MB 00:04:06.154 EAL: Trying to obtain current memory policy. 00:04:06.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.154 EAL: Restoring previous memory policy: 4 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was expanded by 66MB 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was shrunk by 66MB 00:04:06.154 EAL: Trying to obtain current memory policy. 00:04:06.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.154 EAL: Restoring previous memory policy: 4 00:04:06.154 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.154 EAL: request: mp_malloc_sync 00:04:06.154 EAL: No shared files mode enabled, IPC is disabled 00:04:06.154 EAL: Heap on socket 0 was expanded by 130MB 00:04:06.414 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.414 EAL: request: mp_malloc_sync 00:04:06.414 EAL: No shared files mode enabled, IPC is disabled 00:04:06.414 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.414 EAL: Trying to obtain current memory policy. 00:04:06.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.414 EAL: Restoring previous memory policy: 4 00:04:06.414 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.414 EAL: request: mp_malloc_sync 00:04:06.414 EAL: No shared files mode enabled, IPC is disabled 00:04:06.414 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.414 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.414 EAL: request: mp_malloc_sync 00:04:06.414 EAL: No shared files mode enabled, IPC is disabled 00:04:06.414 EAL: Heap on socket 0 was shrunk by 258MB 00:04:06.414 EAL: Trying to obtain current memory policy. 
00:04:06.414 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.414 EAL: Restoring previous memory policy: 4 00:04:06.414 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.414 EAL: request: mp_malloc_sync 00:04:06.415 EAL: No shared files mode enabled, IPC is disabled 00:04:06.415 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.415 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.676 EAL: request: mp_malloc_sync 00:04:06.676 EAL: No shared files mode enabled, IPC is disabled 00:04:06.676 EAL: Heap on socket 0 was shrunk by 514MB 00:04:06.676 EAL: Trying to obtain current memory policy. 00:04:06.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.676 EAL: Restoring previous memory policy: 4 00:04:06.676 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.676 EAL: request: mp_malloc_sync 00:04:06.676 EAL: No shared files mode enabled, IPC is disabled 00:04:06.676 EAL: Heap on socket 0 was expanded by 1026MB 00:04:06.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.936 EAL: request: mp_malloc_sync 00:04:06.936 EAL: No shared files mode enabled, IPC is disabled 00:04:06.936 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:06.936 passed 00:04:06.936 00:04:06.936 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.936 suites 1 1 n/a 0 0 00:04:06.936 tests 2 2 2 0 0 00:04:06.936 asserts 497 497 497 0 n/a 00:04:06.936 00:04:06.936 Elapsed time = 0.692 seconds 00:04:06.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.936 EAL: request: mp_malloc_sync 00:04:06.936 EAL: No shared files mode enabled, IPC is disabled 00:04:06.936 EAL: Heap on socket 0 was shrunk by 2MB 00:04:06.936 EAL: No shared files mode enabled, IPC is disabled 00:04:06.936 EAL: No shared files mode enabled, IPC is disabled 00:04:06.936 EAL: No shared files mode enabled, IPC is disabled 00:04:06.936 00:04:06.936 real 0m0.826s 00:04:06.936 user 0m0.431s 00:04:06.936 sys 0m0.354s 00:04:06.936 11:37:51 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.936 11:37:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:06.936 ************************************ 00:04:06.936 END TEST env_vtophys 00:04:06.936 ************************************ 00:04:06.936 11:37:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:06.936 11:37:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.936 11:37:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.936 11:37:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.936 ************************************ 00:04:06.936 START TEST env_pci 00:04:06.936 ************************************ 00:04:06.936 11:37:51 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:06.936 00:04:06.936 00:04:06.936 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.936 http://cunit.sourceforge.net/ 00:04:06.936 00:04:06.936 00:04:06.936 Suite: pci 00:04:06.936 Test: pci_hook ...[2024-10-11 11:37:51.542937] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 779021 has claimed it 00:04:07.197 EAL: Cannot find device (10000:00:01.0) 00:04:07.198 EAL: Failed to attach device on primary process 00:04:07.198 passed 00:04:07.198 00:04:07.198 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:07.198 suites 1 1 n/a 0 0 00:04:07.198 tests 1 1 1 0 0 00:04:07.198 asserts 25 25 25 0 n/a 00:04:07.198 00:04:07.198 Elapsed time = 0.030 seconds 00:04:07.198 00:04:07.198 real 0m0.052s 00:04:07.198 user 0m0.015s 00:04:07.198 sys 0m0.036s 00:04:07.198 11:37:51 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.198 11:37:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:07.198 ************************************ 00:04:07.198 END TEST env_pci 00:04:07.198 ************************************ 00:04:07.198 11:37:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:07.198 11:37:51 env -- env/env.sh@15 -- # uname 00:04:07.198 11:37:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:07.198 11:37:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:07.198 11:37:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.198 11:37:51 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:07.198 11:37:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.198 11:37:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.198 ************************************ 00:04:07.198 START TEST env_dpdk_post_init 00:04:07.198 ************************************ 00:04:07.198 11:37:51 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.198 EAL: Detected CPU lcores: 128 00:04:07.198 EAL: Detected NUMA nodes: 2 00:04:07.198 EAL: Detected shared linkage of DPDK 00:04:07.198 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.198 EAL: Selected IOVA mode 'VA' 00:04:07.198 EAL: VFIO support initialized 00:04:07.198 EAL: Using IOMMU type 1 (Type 1) 00:04:11.405 Starting DPDK initialization... 00:04:11.405 Starting SPDK post initialization... 00:04:11.405 SPDK NVMe probe 00:04:11.405 Attaching to 0000:65:00.0 00:04:11.405 Attached to 0000:65:00.0 00:04:11.405 Cleaning up... 
00:04:12.790 00:04:12.790 real 0m5.733s 00:04:12.790 user 0m0.181s 00:04:12.790 sys 0m0.107s 00:04:12.790 11:37:57 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.790 11:37:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.790 ************************************ 00:04:12.790 END TEST env_dpdk_post_init 00:04:12.790 ************************************ 00:04:13.051 11:37:57 env -- env/env.sh@26 -- # uname 00:04:13.051 11:37:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:13.051 11:37:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.051 11:37:57 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.051 11:37:57 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.051 11:37:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.051 ************************************ 00:04:13.051 START TEST env_mem_callbacks 00:04:13.051 ************************************ 00:04:13.051 11:37:57 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.051 EAL: Detected CPU lcores: 128 00:04:13.051 EAL: Detected NUMA nodes: 2 00:04:13.051 EAL: Detected shared linkage of DPDK 00:04:13.051 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.051 EAL: Selected IOVA mode 'VA' 00:04:13.051 EAL: VFIO support initialized 00:04:13.051 00:04:13.051 00:04:13.051 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.051 http://cunit.sourceforge.net/ 00:04:13.051 00:04:13.051 00:04:13.051 Suite: memory 00:04:13.051 Test: test ... 00:04:13.051 register 0x200000200000 2097152 00:04:13.051 malloc 3145728 00:04:13.051 register 0x200000400000 4194304 00:04:13.051 buf 0x200000500000 len 3145728 PASSED 00:04:13.051 malloc 64 00:04:13.051 buf 0x2000004fff40 len 64 PASSED 00:04:13.051 malloc 4194304 00:04:13.051 register 0x200000800000 6291456 00:04:13.051 buf 0x200000a00000 len 4194304 PASSED 00:04:13.051 free 0x200000500000 3145728 00:04:13.051 free 0x2000004fff40 64 00:04:13.051 unregister 0x200000400000 4194304 PASSED 00:04:13.051 free 0x200000a00000 4194304 00:04:13.051 unregister 0x200000800000 6291456 PASSED 00:04:13.051 malloc 8388608 00:04:13.051 register 0x200000400000 10485760 00:04:13.051 buf 0x200000600000 len 8388608 PASSED 00:04:13.051 free 0x200000600000 8388608 00:04:13.051 unregister 0x200000400000 10485760 PASSED 00:04:13.051 passed 00:04:13.051 00:04:13.051 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.051 suites 1 1 n/a 0 0 00:04:13.051 tests 1 1 1 0 0 00:04:13.051 asserts 15 15 15 0 n/a 00:04:13.051 00:04:13.051 Elapsed time = 0.010 seconds 00:04:13.051 00:04:13.051 real 0m0.070s 00:04:13.051 user 0m0.022s 00:04:13.051 sys 0m0.048s 00:04:13.051 11:37:57 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.051 11:37:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.051 ************************************ 00:04:13.051 END TEST env_mem_callbacks 00:04:13.051 ************************************ 00:04:13.051 00:04:13.051 real 0m7.510s 00:04:13.051 user 0m1.117s 00:04:13.051 sys 0m0.941s 00:04:13.051 11:37:57 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.051 11:37:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.051 ************************************ 00:04:13.051 END TEST env 
00:04:13.051 ************************************ 00:04:13.051 11:37:57 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:13.051 11:37:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.051 11:37:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.051 11:37:57 -- common/autotest_common.sh@10 -- # set +x 00:04:13.051 ************************************ 00:04:13.051 START TEST rpc 00:04:13.051 ************************************ 00:04:13.313 11:37:57 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:13.313 * Looking for test storage... 00:04:13.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.313 11:37:57 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:13.313 11:37:57 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:13.313 11:37:57 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:13.313 11:37:57 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:13.313 11:37:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.313 11:37:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.313 11:37:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.313 11:37:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.313 11:37:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.313 11:37:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.313 11:37:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.313 11:37:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.313 11:37:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.313 11:37:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.313 11:37:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.313 11:37:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.313 11:37:57 rpc -- scripts/common.sh@345 -- # : 1 00:04:13.313 11:37:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.313 11:37:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.313 11:37:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.313 11:37:57 rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.313 11:37:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.313 11:37:57 rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.313 11:37:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.313 11:37:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.313 11:37:57 rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.313 11:37:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.313 11:37:57 rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.313 11:37:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.313 11:37:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.313 11:37:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.313 11:37:57 rpc -- scripts/common.sh@368 -- # return 0 00:04:13.313 11:37:57 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.313 11:37:57 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:13.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.313 --rc genhtml_branch_coverage=1 00:04:13.313 --rc genhtml_function_coverage=1 00:04:13.313 --rc genhtml_legend=1 00:04:13.313 --rc geninfo_all_blocks=1 00:04:13.313 --rc geninfo_unexecuted_blocks=1 00:04:13.313 00:04:13.313 ' 00:04:13.313 11:37:57 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:13.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.314 --rc genhtml_branch_coverage=1 00:04:13.314 --rc genhtml_function_coverage=1 00:04:13.314 --rc genhtml_legend=1 00:04:13.314 --rc geninfo_all_blocks=1 00:04:13.314 --rc geninfo_unexecuted_blocks=1 00:04:13.314 00:04:13.314 ' 00:04:13.314 11:37:57 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:13.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.314 --rc genhtml_branch_coverage=1 00:04:13.314 --rc genhtml_function_coverage=1 00:04:13.314 --rc genhtml_legend=1 00:04:13.314 --rc geninfo_all_blocks=1 00:04:13.314 --rc geninfo_unexecuted_blocks=1 00:04:13.314 00:04:13.314 ' 00:04:13.314 11:37:57 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:13.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.314 --rc genhtml_branch_coverage=1 00:04:13.314 --rc genhtml_function_coverage=1 00:04:13.314 --rc genhtml_legend=1 00:04:13.314 --rc geninfo_all_blocks=1 00:04:13.314 --rc geninfo_unexecuted_blocks=1 00:04:13.314 00:04:13.314 ' 00:04:13.314 11:37:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=780384 00:04:13.314 11:37:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.314 11:37:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 780384 00:04:13.314 11:37:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:13.314 11:37:57 rpc -- common/autotest_common.sh@831 -- # '[' -z 780384 ']' 00:04:13.314 11:37:57 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.314 11:37:57 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:13.314 11:37:57 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:13.314 11:37:57 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:13.314 11:37:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.575 [2024-10-11 11:37:57.946862] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:13.575 [2024-10-11 11:37:57.946931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780384 ] 00:04:13.575 [2024-10-11 11:37:58.029900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.575 [2024-10-11 11:37:58.082152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:13.575 [2024-10-11 11:37:58.082209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 780384' to capture a snapshot of events at runtime. 00:04:13.575 [2024-10-11 11:37:58.082218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:13.575 [2024-10-11 11:37:58.082226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:13.575 [2024-10-11 11:37:58.082232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid780384 for offline analysis/debug. 00:04:13.575 [2024-10-11 11:37:58.083141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.148 11:37:58 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:14.148 11:37:58 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:14.148 11:37:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.148 11:37:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.148 11:37:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:14.148 11:37:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:14.148 11:37:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.148 11:37:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.148 11:37:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.409 ************************************ 00:04:14.409 START TEST rpc_integrity 00:04:14.409 ************************************ 00:04:14.409 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:14.409 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.409 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.409 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.410 { 00:04:14.410 "name": "Malloc0", 00:04:14.410 "aliases": [ 00:04:14.410 "60ecf44b-624e-474c-a1f1-6a3974848e2e" 00:04:14.410 ], 00:04:14.410 "product_name": "Malloc disk", 00:04:14.410 "block_size": 512, 00:04:14.410 "num_blocks": 16384, 00:04:14.410 "uuid": "60ecf44b-624e-474c-a1f1-6a3974848e2e", 00:04:14.410 "assigned_rate_limits": { 00:04:14.410 "rw_ios_per_sec": 0, 00:04:14.410 "rw_mbytes_per_sec": 0, 00:04:14.410 "r_mbytes_per_sec": 0, 00:04:14.410 "w_mbytes_per_sec": 0 00:04:14.410 }, 00:04:14.410 "claimed": false, 00:04:14.410 "zoned": false, 00:04:14.410 "supported_io_types": { 00:04:14.410 "read": true, 00:04:14.410 "write": true, 00:04:14.410 "unmap": true, 00:04:14.410 "flush": true, 00:04:14.410 "reset": true, 00:04:14.410 "nvme_admin": false, 00:04:14.410 "nvme_io": false, 00:04:14.410 "nvme_io_md": false, 00:04:14.410 "write_zeroes": true, 00:04:14.410 "zcopy": true, 00:04:14.410 "get_zone_info": false, 00:04:14.410 "zone_management": false, 00:04:14.410 "zone_append": false, 00:04:14.410 "compare": false, 00:04:14.410 "compare_and_write": false, 00:04:14.410 "abort": true, 00:04:14.410 "seek_hole": false, 00:04:14.410 "seek_data": false, 00:04:14.410 "copy": true, 00:04:14.410 "nvme_iov_md": false 00:04:14.410 }, 00:04:14.410 "memory_domains": [ 00:04:14.410 { 00:04:14.410 "dma_device_id": "system", 00:04:14.410 "dma_device_type": 1 00:04:14.410 }, 00:04:14.410 { 00:04:14.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.410 "dma_device_type": 2 00:04:14.410 } 00:04:14.410 ], 00:04:14.410 "driver_specific": {} 00:04:14.410 } 00:04:14.410 ]' 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.410 [2024-10-11 11:37:58.950174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:14.410 [2024-10-11 11:37:58.950222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.410 [2024-10-11 11:37:58.950245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ad92a0 00:04:14.410 [2024-10-11 11:37:58.950254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.410 [2024-10-11 11:37:58.951836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.410 [2024-10-11 11:37:58.951871] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.410 Passthru0 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.410 11:37:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.410 { 00:04:14.410 "name": "Malloc0", 00:04:14.410 "aliases": [ 00:04:14.410 "60ecf44b-624e-474c-a1f1-6a3974848e2e" 00:04:14.410 ], 00:04:14.410 "product_name": "Malloc disk", 00:04:14.410 "block_size": 512, 00:04:14.410 "num_blocks": 16384, 00:04:14.410 "uuid": "60ecf44b-624e-474c-a1f1-6a3974848e2e", 00:04:14.410 "assigned_rate_limits": { 00:04:14.410 "rw_ios_per_sec": 0, 00:04:14.410 "rw_mbytes_per_sec": 0, 00:04:14.410 "r_mbytes_per_sec": 0, 00:04:14.410 "w_mbytes_per_sec": 0 00:04:14.410 }, 00:04:14.410 "claimed": true, 00:04:14.410 "claim_type": "exclusive_write", 00:04:14.410 "zoned": false, 00:04:14.410 "supported_io_types": { 00:04:14.410 "read": true, 00:04:14.410 "write": true, 00:04:14.410 "unmap": true, 00:04:14.410 "flush": true, 00:04:14.410 "reset": true, 00:04:14.410 "nvme_admin": false, 00:04:14.410 "nvme_io": false, 00:04:14.410 "nvme_io_md": false, 00:04:14.410 "write_zeroes": true, 00:04:14.410 "zcopy": true, 00:04:14.410 "get_zone_info": false, 00:04:14.410 "zone_management": false, 00:04:14.410 "zone_append": false, 00:04:14.410 "compare": false, 00:04:14.410 "compare_and_write": false, 00:04:14.410 "abort": true, 00:04:14.410 "seek_hole": false, 00:04:14.410 "seek_data": false, 00:04:14.410 "copy": true, 00:04:14.410 "nvme_iov_md": false 00:04:14.410 }, 00:04:14.410 "memory_domains": [ 00:04:14.410 { 00:04:14.410 "dma_device_id": "system", 00:04:14.410 "dma_device_type": 1 00:04:14.410 }, 00:04:14.410 { 00:04:14.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.410 "dma_device_type": 2 00:04:14.410 } 00:04:14.410 ], 00:04:14.410 "driver_specific": {} 00:04:14.410 }, 00:04:14.410 { 00:04:14.410 "name": "Passthru0", 00:04:14.410 "aliases": [ 00:04:14.410 "9a67a611-0700-5ae2-9207-ed5b8768ccb5" 00:04:14.410 ], 00:04:14.410 "product_name": "passthru", 00:04:14.410 "block_size": 512, 00:04:14.410 "num_blocks": 16384, 00:04:14.410 "uuid": "9a67a611-0700-5ae2-9207-ed5b8768ccb5", 00:04:14.410 "assigned_rate_limits": { 00:04:14.410 "rw_ios_per_sec": 0, 00:04:14.410 "rw_mbytes_per_sec": 0, 00:04:14.410 "r_mbytes_per_sec": 0, 00:04:14.410 "w_mbytes_per_sec": 0 00:04:14.410 }, 00:04:14.410 "claimed": false, 00:04:14.410 "zoned": false, 00:04:14.410 "supported_io_types": { 00:04:14.410 "read": true, 00:04:14.410 "write": true, 00:04:14.410 "unmap": true, 00:04:14.410 "flush": true, 00:04:14.410 "reset": true, 00:04:14.410 "nvme_admin": false, 00:04:14.410 "nvme_io": false, 00:04:14.410 "nvme_io_md": false, 00:04:14.410 "write_zeroes": true, 00:04:14.410 "zcopy": true, 00:04:14.410 "get_zone_info": false, 00:04:14.410 "zone_management": false, 00:04:14.410 "zone_append": false, 00:04:14.410 "compare": false, 00:04:14.410 "compare_and_write": false, 00:04:14.410 "abort": true, 00:04:14.410 "seek_hole": false, 00:04:14.410 "seek_data": false, 00:04:14.410 "copy": true, 00:04:14.410 "nvme_iov_md": false 00:04:14.410 }, 00:04:14.410 "memory_domains": [ 
00:04:14.410 { 00:04:14.410 "dma_device_id": "system", 00:04:14.410 "dma_device_type": 1 00:04:14.410 }, 00:04:14.410 { 00:04:14.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.410 "dma_device_type": 2 00:04:14.410 } 00:04:14.410 ], 00:04:14.410 "driver_specific": { 00:04:14.410 "passthru": { 00:04:14.410 "name": "Passthru0", 00:04:14.410 "base_bdev_name": "Malloc0" 00:04:14.410 } 00:04:14.410 } 00:04:14.410 } 00:04:14.410 ]' 00:04:14.410 11:37:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.410 11:37:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.410 11:37:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.410 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.410 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.672 11:37:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.672 11:37:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.672 11:37:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.672 11:37:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.672 11:37:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.672 00:04:14.672 real 0m0.308s 00:04:14.672 user 0m0.193s 00:04:14.672 sys 0m0.043s 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.672 11:37:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 ************************************ 00:04:14.672 END TEST rpc_integrity 00:04:14.672 ************************************ 00:04:14.672 11:37:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:14.672 11:37:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.672 11:37:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.672 11:37:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 ************************************ 00:04:14.672 START TEST rpc_plugins 00:04:14.672 ************************************ 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:14.672 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.672 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.672 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 
11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.672 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.672 { 00:04:14.672 "name": "Malloc1", 00:04:14.672 "aliases": [ 00:04:14.672 "a554e6d8-12cd-4165-ae01-1316bfacaa91" 00:04:14.672 ], 00:04:14.672 "product_name": "Malloc disk", 00:04:14.672 "block_size": 4096, 00:04:14.672 "num_blocks": 256, 00:04:14.672 "uuid": "a554e6d8-12cd-4165-ae01-1316bfacaa91", 00:04:14.672 "assigned_rate_limits": { 00:04:14.672 "rw_ios_per_sec": 0, 00:04:14.672 "rw_mbytes_per_sec": 0, 00:04:14.672 "r_mbytes_per_sec": 0, 00:04:14.672 "w_mbytes_per_sec": 0 00:04:14.672 }, 00:04:14.672 "claimed": false, 00:04:14.672 "zoned": false, 00:04:14.672 "supported_io_types": { 00:04:14.672 "read": true, 00:04:14.672 "write": true, 00:04:14.672 "unmap": true, 00:04:14.672 "flush": true, 00:04:14.672 "reset": true, 00:04:14.672 "nvme_admin": false, 00:04:14.672 "nvme_io": false, 00:04:14.672 "nvme_io_md": false, 00:04:14.672 "write_zeroes": true, 00:04:14.672 "zcopy": true, 00:04:14.672 "get_zone_info": false, 00:04:14.672 "zone_management": false, 00:04:14.672 "zone_append": false, 00:04:14.672 "compare": false, 00:04:14.672 "compare_and_write": false, 00:04:14.672 "abort": true, 00:04:14.672 "seek_hole": false, 00:04:14.672 "seek_data": false, 00:04:14.672 "copy": true, 00:04:14.672 "nvme_iov_md": false 00:04:14.672 }, 00:04:14.672 "memory_domains": [ 00:04:14.672 { 00:04:14.672 "dma_device_id": "system", 00:04:14.672 "dma_device_type": 1 00:04:14.672 }, 00:04:14.672 { 00:04:14.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.672 "dma_device_type": 2 00:04:14.672 } 00:04:14.672 ], 00:04:14.672 "driver_specific": {} 00:04:14.672 } 00:04:14.672 ]' 00:04:14.672 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.672 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.672 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.672 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.672 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.934 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.934 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.934 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:14.934 11:37:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:14.934 00:04:14.934 real 0m0.156s 00:04:14.934 user 0m0.098s 00:04:14.934 sys 0m0.023s 00:04:14.934 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.934 11:37:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.934 ************************************ 00:04:14.934 END TEST rpc_plugins 00:04:14.934 ************************************ 00:04:14.934 11:37:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:14.934 11:37:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.934 11:37:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.934 11:37:59 rpc -- common/autotest_common.sh@10 -- # set +x 
00:04:14.934 ************************************ 00:04:14.934 START TEST rpc_trace_cmd_test 00:04:14.934 ************************************ 00:04:14.934 11:37:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:14.934 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.934 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.934 11:37:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.934 11:37:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.934 11:37:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.934 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.934 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid780384", 00:04:14.934 "tpoint_group_mask": "0x8", 00:04:14.934 "iscsi_conn": { 00:04:14.934 "mask": "0x2", 00:04:14.934 "tpoint_mask": "0x0" 00:04:14.934 }, 00:04:14.934 "scsi": { 00:04:14.934 "mask": "0x4", 00:04:14.934 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "bdev": { 00:04:14.935 "mask": "0x8", 00:04:14.935 "tpoint_mask": "0xffffffffffffffff" 00:04:14.935 }, 00:04:14.935 "nvmf_rdma": { 00:04:14.935 "mask": "0x10", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "nvmf_tcp": { 00:04:14.935 "mask": "0x20", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "ftl": { 00:04:14.935 "mask": "0x40", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "blobfs": { 00:04:14.935 "mask": "0x80", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "dsa": { 00:04:14.935 "mask": "0x200", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "thread": { 00:04:14.935 "mask": "0x400", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "nvme_pcie": { 00:04:14.935 "mask": "0x800", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "iaa": { 00:04:14.935 "mask": "0x1000", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "nvme_tcp": { 00:04:14.935 "mask": "0x2000", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "bdev_nvme": { 00:04:14.935 "mask": "0x4000", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "sock": { 00:04:14.935 "mask": "0x8000", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "blob": { 00:04:14.935 "mask": "0x10000", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "bdev_raid": { 00:04:14.935 "mask": "0x20000", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 }, 00:04:14.935 "scheduler": { 00:04:14.935 "mask": "0x40000", 00:04:14.935 "tpoint_mask": "0x0" 00:04:14.935 } 00:04:14.935 }' 00:04:14.935 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.935 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:14.935 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.935 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.935 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:15.196 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:15.196 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:15.196 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:15.196 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:15.196 11:37:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 
-- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:15.196 00:04:15.196 real 0m0.239s 00:04:15.196 user 0m0.193s 00:04:15.196 sys 0m0.033s 00:04:15.196 11:37:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.196 11:37:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.196 ************************************ 00:04:15.196 END TEST rpc_trace_cmd_test 00:04:15.196 ************************************ 00:04:15.196 11:37:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:15.196 11:37:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:15.196 11:37:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:15.196 11:37:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.196 11:37:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.196 11:37:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.196 ************************************ 00:04:15.196 START TEST rpc_daemon_integrity 00:04:15.196 ************************************ 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.196 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.458 { 00:04:15.458 "name": "Malloc2", 00:04:15.458 "aliases": [ 00:04:15.458 "fed709f5-0ec5-44ef-8f4d-fc039b3094b6" 00:04:15.458 ], 00:04:15.458 "product_name": "Malloc disk", 00:04:15.458 "block_size": 512, 00:04:15.458 "num_blocks": 16384, 00:04:15.458 "uuid": "fed709f5-0ec5-44ef-8f4d-fc039b3094b6", 00:04:15.458 "assigned_rate_limits": { 00:04:15.458 "rw_ios_per_sec": 0, 00:04:15.458 "rw_mbytes_per_sec": 0, 00:04:15.458 "r_mbytes_per_sec": 0, 00:04:15.458 "w_mbytes_per_sec": 0 00:04:15.458 }, 00:04:15.458 "claimed": false, 00:04:15.458 "zoned": false, 00:04:15.458 "supported_io_types": { 00:04:15.458 "read": true, 00:04:15.458 "write": true, 00:04:15.458 "unmap": true, 00:04:15.458 "flush": true, 00:04:15.458 "reset": true, 00:04:15.458 "nvme_admin": false, 00:04:15.458 "nvme_io": false, 00:04:15.458 "nvme_io_md": false, 
00:04:15.458 "write_zeroes": true, 00:04:15.458 "zcopy": true, 00:04:15.458 "get_zone_info": false, 00:04:15.458 "zone_management": false, 00:04:15.458 "zone_append": false, 00:04:15.458 "compare": false, 00:04:15.458 "compare_and_write": false, 00:04:15.458 "abort": true, 00:04:15.458 "seek_hole": false, 00:04:15.458 "seek_data": false, 00:04:15.458 "copy": true, 00:04:15.458 "nvme_iov_md": false 00:04:15.458 }, 00:04:15.458 "memory_domains": [ 00:04:15.458 { 00:04:15.458 "dma_device_id": "system", 00:04:15.458 "dma_device_type": 1 00:04:15.458 }, 00:04:15.458 { 00:04:15.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.458 "dma_device_type": 2 00:04:15.458 } 00:04:15.458 ], 00:04:15.458 "driver_specific": {} 00:04:15.458 } 00:04:15.458 ]' 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.458 [2024-10-11 11:37:59.904767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:15.458 [2024-10-11 11:37:59.904811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.458 [2024-10-11 11:37:59.904827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0a590 00:04:15.458 [2024-10-11 11:37:59.904834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.458 [2024-10-11 11:37:59.906354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.458 [2024-10-11 11:37:59.906390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.458 Passthru0 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.458 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.458 { 00:04:15.458 "name": "Malloc2", 00:04:15.458 "aliases": [ 00:04:15.458 "fed709f5-0ec5-44ef-8f4d-fc039b3094b6" 00:04:15.458 ], 00:04:15.458 "product_name": "Malloc disk", 00:04:15.458 "block_size": 512, 00:04:15.458 "num_blocks": 16384, 00:04:15.458 "uuid": "fed709f5-0ec5-44ef-8f4d-fc039b3094b6", 00:04:15.458 "assigned_rate_limits": { 00:04:15.458 "rw_ios_per_sec": 0, 00:04:15.458 "rw_mbytes_per_sec": 0, 00:04:15.458 "r_mbytes_per_sec": 0, 00:04:15.458 "w_mbytes_per_sec": 0 00:04:15.458 }, 00:04:15.458 "claimed": true, 00:04:15.458 "claim_type": "exclusive_write", 00:04:15.458 "zoned": false, 00:04:15.458 "supported_io_types": { 00:04:15.458 "read": true, 00:04:15.458 "write": true, 00:04:15.458 "unmap": true, 00:04:15.458 "flush": true, 00:04:15.458 "reset": true, 00:04:15.458 "nvme_admin": false, 00:04:15.458 "nvme_io": false, 00:04:15.458 "nvme_io_md": false, 00:04:15.458 "write_zeroes": true, 00:04:15.458 "zcopy": true, 00:04:15.458 "get_zone_info": false, 00:04:15.458 
"zone_management": false, 00:04:15.458 "zone_append": false, 00:04:15.458 "compare": false, 00:04:15.458 "compare_and_write": false, 00:04:15.458 "abort": true, 00:04:15.458 "seek_hole": false, 00:04:15.458 "seek_data": false, 00:04:15.458 "copy": true, 00:04:15.458 "nvme_iov_md": false 00:04:15.458 }, 00:04:15.458 "memory_domains": [ 00:04:15.458 { 00:04:15.458 "dma_device_id": "system", 00:04:15.458 "dma_device_type": 1 00:04:15.458 }, 00:04:15.458 { 00:04:15.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.458 "dma_device_type": 2 00:04:15.458 } 00:04:15.458 ], 00:04:15.458 "driver_specific": {} 00:04:15.458 }, 00:04:15.458 { 00:04:15.458 "name": "Passthru0", 00:04:15.458 "aliases": [ 00:04:15.458 "aefa9380-7166-500e-87c0-0ef28023c0e2" 00:04:15.458 ], 00:04:15.458 "product_name": "passthru", 00:04:15.458 "block_size": 512, 00:04:15.458 "num_blocks": 16384, 00:04:15.458 "uuid": "aefa9380-7166-500e-87c0-0ef28023c0e2", 00:04:15.458 "assigned_rate_limits": { 00:04:15.458 "rw_ios_per_sec": 0, 00:04:15.458 "rw_mbytes_per_sec": 0, 00:04:15.458 "r_mbytes_per_sec": 0, 00:04:15.458 "w_mbytes_per_sec": 0 00:04:15.458 }, 00:04:15.458 "claimed": false, 00:04:15.458 "zoned": false, 00:04:15.458 "supported_io_types": { 00:04:15.458 "read": true, 00:04:15.458 "write": true, 00:04:15.458 "unmap": true, 00:04:15.458 "flush": true, 00:04:15.458 "reset": true, 00:04:15.458 "nvme_admin": false, 00:04:15.458 "nvme_io": false, 00:04:15.458 "nvme_io_md": false, 00:04:15.458 "write_zeroes": true, 00:04:15.458 "zcopy": true, 00:04:15.458 "get_zone_info": false, 00:04:15.458 "zone_management": false, 00:04:15.458 "zone_append": false, 00:04:15.458 "compare": false, 00:04:15.458 "compare_and_write": false, 00:04:15.458 "abort": true, 00:04:15.458 "seek_hole": false, 00:04:15.458 "seek_data": false, 00:04:15.458 "copy": true, 00:04:15.458 "nvme_iov_md": false 00:04:15.458 }, 00:04:15.458 "memory_domains": [ 00:04:15.458 { 00:04:15.458 "dma_device_id": "system", 00:04:15.458 "dma_device_type": 1 00:04:15.458 }, 00:04:15.458 { 00:04:15.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.458 "dma_device_type": 2 00:04:15.458 } 00:04:15.458 ], 00:04:15.458 "driver_specific": { 00:04:15.458 "passthru": { 00:04:15.458 "name": "Passthru0", 00:04:15.458 "base_bdev_name": "Malloc2" 00:04:15.459 } 00:04:15.459 } 00:04:15.459 } 00:04:15.459 ]' 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.459 11:37:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.459 00:04:15.459 real 0m0.311s 00:04:15.459 user 0m0.201s 00:04:15.459 sys 0m0.041s 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.459 11:38:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.459 ************************************ 00:04:15.459 END TEST rpc_daemon_integrity 00:04:15.459 ************************************ 00:04:15.720 11:38:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.720 11:38:00 rpc -- rpc/rpc.sh@84 -- # killprocess 780384 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@950 -- # '[' -z 780384 ']' 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@954 -- # kill -0 780384 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@955 -- # uname 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 780384 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 780384' 00:04:15.720 killing process with pid 780384 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@969 -- # kill 780384 00:04:15.720 11:38:00 rpc -- common/autotest_common.sh@974 -- # wait 780384 00:04:15.982 00:04:15.982 real 0m2.730s 00:04:15.982 user 0m3.451s 00:04:15.982 sys 0m0.868s 00:04:15.982 11:38:00 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.982 11:38:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.982 ************************************ 00:04:15.982 END TEST rpc 00:04:15.982 ************************************ 00:04:15.982 11:38:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.982 11:38:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.982 11:38:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.982 11:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:15.982 ************************************ 00:04:15.982 START TEST skip_rpc 00:04:15.982 ************************************ 00:04:15.982 11:38:00 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.982 * Looking for test storage... 
00:04:15.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.982 11:38:00 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:15.982 11:38:00 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:15.982 11:38:00 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:16.243 11:38:00 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.243 11:38:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:16.243 11:38:00 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.243 11:38:00 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:16.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.243 --rc genhtml_branch_coverage=1 00:04:16.243 --rc genhtml_function_coverage=1 00:04:16.243 --rc genhtml_legend=1 00:04:16.243 --rc geninfo_all_blocks=1 00:04:16.243 --rc geninfo_unexecuted_blocks=1 00:04:16.243 00:04:16.243 ' 00:04:16.243 11:38:00 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:16.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.243 --rc genhtml_branch_coverage=1 00:04:16.243 --rc genhtml_function_coverage=1 00:04:16.243 --rc genhtml_legend=1 00:04:16.243 --rc geninfo_all_blocks=1 00:04:16.243 --rc geninfo_unexecuted_blocks=1 00:04:16.243 00:04:16.243 ' 00:04:16.243 11:38:00 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:16.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.243 --rc genhtml_branch_coverage=1 00:04:16.243 --rc genhtml_function_coverage=1 00:04:16.243 --rc genhtml_legend=1 00:04:16.243 --rc geninfo_all_blocks=1 00:04:16.243 --rc geninfo_unexecuted_blocks=1 00:04:16.243 00:04:16.243 ' 00:04:16.243 11:38:00 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:16.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.243 --rc genhtml_branch_coverage=1 00:04:16.243 --rc genhtml_function_coverage=1 00:04:16.243 --rc genhtml_legend=1 00:04:16.243 --rc geninfo_all_blocks=1 00:04:16.243 --rc geninfo_unexecuted_blocks=1 00:04:16.243 00:04:16.243 ' 00:04:16.244 11:38:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.244 11:38:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.244 11:38:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.244 11:38:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.244 11:38:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.244 11:38:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.244 ************************************ 00:04:16.244 START TEST skip_rpc 00:04:16.244 ************************************ 00:04:16.244 11:38:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:16.244 11:38:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=781013 00:04:16.244 11:38:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.244 11:38:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:16.244 11:38:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.244 [2024-10-11 11:38:00.806218] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:16.244 [2024-10-11 11:38:00.806277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781013 ] 00:04:16.505 [2024-10-11 11:38:00.890166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.505 [2024-10-11 11:38:00.943844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 781013 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 781013 ']' 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 781013 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 781013 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 781013' 00:04:21.795 killing process with pid 781013 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 781013 00:04:21.795 11:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 781013 00:04:21.795 00:04:21.795 real 0m5.263s 00:04:21.795 user 0m5.017s 00:04:21.795 sys 0m0.290s 00:04:21.795 11:38:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.795 11:38:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.795 ************************************ 00:04:21.795 END TEST skip_rpc 00:04:21.795 ************************************ 
00:04:21.795 11:38:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:21.795 11:38:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.795 11:38:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.795 11:38:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.795 ************************************ 00:04:21.795 START TEST skip_rpc_with_json 00:04:21.795 ************************************ 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=782093 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 782093 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 782093 ']' 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.795 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.795 [2024-10-11 11:38:06.140530] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:21.795 [2024-10-11 11:38:06.140585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782093 ] 00:04:21.795 [2024-10-11 11:38:06.219793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.795 [2024-10-11 11:38:06.253493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.366 [2024-10-11 11:38:06.928066] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:22.366 request: 00:04:22.366 { 00:04:22.366 "trtype": "tcp", 00:04:22.366 "method": "nvmf_get_transports", 00:04:22.366 "req_id": 1 00:04:22.366 } 00:04:22.366 Got JSON-RPC error response 00:04:22.366 response: 00:04:22.366 { 00:04:22.366 "code": -19, 00:04:22.366 "message": "No such device" 00:04:22.366 } 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.366 [2024-10-11 11:38:06.940164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.366 11:38:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.626 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.626 11:38:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.626 { 00:04:22.626 "subsystems": [ 00:04:22.626 { 00:04:22.626 "subsystem": "fsdev", 00:04:22.626 "config": [ 00:04:22.626 { 00:04:22.626 "method": "fsdev_set_opts", 00:04:22.626 "params": { 00:04:22.626 "fsdev_io_pool_size": 65535, 00:04:22.626 "fsdev_io_cache_size": 256 00:04:22.626 } 00:04:22.626 } 00:04:22.626 ] 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "subsystem": "vfio_user_target", 00:04:22.626 "config": null 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "subsystem": "keyring", 00:04:22.626 "config": [] 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "subsystem": "iobuf", 00:04:22.626 "config": [ 00:04:22.626 { 00:04:22.626 "method": "iobuf_set_options", 00:04:22.626 "params": { 00:04:22.626 "small_pool_count": 8192, 00:04:22.626 "large_pool_count": 1024, 00:04:22.626 "small_bufsize": 8192, 00:04:22.626 "large_bufsize": 135168 00:04:22.626 } 00:04:22.626 } 00:04:22.626 ] 00:04:22.626 }, 00:04:22.626 { 
00:04:22.626 "subsystem": "sock", 00:04:22.626 "config": [ 00:04:22.626 { 00:04:22.626 "method": "sock_set_default_impl", 00:04:22.626 "params": { 00:04:22.626 "impl_name": "posix" 00:04:22.626 } 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "method": "sock_impl_set_options", 00:04:22.626 "params": { 00:04:22.626 "impl_name": "ssl", 00:04:22.626 "recv_buf_size": 4096, 00:04:22.626 "send_buf_size": 4096, 00:04:22.626 "enable_recv_pipe": true, 00:04:22.626 "enable_quickack": false, 00:04:22.626 "enable_placement_id": 0, 00:04:22.626 "enable_zerocopy_send_server": true, 00:04:22.626 "enable_zerocopy_send_client": false, 00:04:22.626 "zerocopy_threshold": 0, 00:04:22.626 "tls_version": 0, 00:04:22.626 "enable_ktls": false 00:04:22.626 } 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "method": "sock_impl_set_options", 00:04:22.626 "params": { 00:04:22.626 "impl_name": "posix", 00:04:22.626 "recv_buf_size": 2097152, 00:04:22.626 "send_buf_size": 2097152, 00:04:22.626 "enable_recv_pipe": true, 00:04:22.626 "enable_quickack": false, 00:04:22.626 "enable_placement_id": 0, 00:04:22.626 "enable_zerocopy_send_server": true, 00:04:22.626 "enable_zerocopy_send_client": false, 00:04:22.626 "zerocopy_threshold": 0, 00:04:22.626 "tls_version": 0, 00:04:22.626 "enable_ktls": false 00:04:22.626 } 00:04:22.626 } 00:04:22.626 ] 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "subsystem": "vmd", 00:04:22.626 "config": [] 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "subsystem": "accel", 00:04:22.626 "config": [ 00:04:22.626 { 00:04:22.626 "method": "accel_set_options", 00:04:22.626 "params": { 00:04:22.626 "small_cache_size": 128, 00:04:22.626 "large_cache_size": 16, 00:04:22.626 "task_count": 2048, 00:04:22.626 "sequence_count": 2048, 00:04:22.626 "buf_count": 2048 00:04:22.626 } 00:04:22.626 } 00:04:22.626 ] 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "subsystem": "bdev", 00:04:22.626 "config": [ 00:04:22.626 { 00:04:22.626 "method": "bdev_set_options", 00:04:22.626 "params": { 00:04:22.626 "bdev_io_pool_size": 65535, 00:04:22.626 "bdev_io_cache_size": 256, 00:04:22.626 "bdev_auto_examine": true, 00:04:22.626 "iobuf_small_cache_size": 128, 00:04:22.626 "iobuf_large_cache_size": 16 00:04:22.626 } 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "method": "bdev_raid_set_options", 00:04:22.626 "params": { 00:04:22.626 "process_window_size_kb": 1024, 00:04:22.626 "process_max_bandwidth_mb_sec": 0 00:04:22.626 } 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "method": "bdev_iscsi_set_options", 00:04:22.626 "params": { 00:04:22.626 "timeout_sec": 30 00:04:22.626 } 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "method": "bdev_nvme_set_options", 00:04:22.626 "params": { 00:04:22.626 "action_on_timeout": "none", 00:04:22.626 "timeout_us": 0, 00:04:22.626 "timeout_admin_us": 0, 00:04:22.626 "keep_alive_timeout_ms": 10000, 00:04:22.626 "arbitration_burst": 0, 00:04:22.626 "low_priority_weight": 0, 00:04:22.626 "medium_priority_weight": 0, 00:04:22.626 "high_priority_weight": 0, 00:04:22.626 "nvme_adminq_poll_period_us": 10000, 00:04:22.626 "nvme_ioq_poll_period_us": 0, 00:04:22.626 "io_queue_requests": 0, 00:04:22.626 "delay_cmd_submit": true, 00:04:22.626 "transport_retry_count": 4, 00:04:22.626 "bdev_retry_count": 3, 00:04:22.626 "transport_ack_timeout": 0, 00:04:22.626 "ctrlr_loss_timeout_sec": 0, 00:04:22.626 "reconnect_delay_sec": 0, 00:04:22.626 "fast_io_fail_timeout_sec": 0, 00:04:22.626 "disable_auto_failback": false, 00:04:22.626 "generate_uuids": false, 00:04:22.626 "transport_tos": 0, 00:04:22.626 "nvme_error_stat": false, 
00:04:22.626 "rdma_srq_size": 0, 00:04:22.626 "io_path_stat": false, 00:04:22.626 "allow_accel_sequence": false, 00:04:22.626 "rdma_max_cq_size": 0, 00:04:22.626 "rdma_cm_event_timeout_ms": 0, 00:04:22.626 "dhchap_digests": [ 00:04:22.626 "sha256", 00:04:22.626 "sha384", 00:04:22.626 "sha512" 00:04:22.626 ], 00:04:22.626 "dhchap_dhgroups": [ 00:04:22.626 "null", 00:04:22.626 "ffdhe2048", 00:04:22.626 "ffdhe3072", 00:04:22.626 "ffdhe4096", 00:04:22.626 "ffdhe6144", 00:04:22.626 "ffdhe8192" 00:04:22.626 ] 00:04:22.626 } 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "method": "bdev_nvme_set_hotplug", 00:04:22.626 "params": { 00:04:22.626 "period_us": 100000, 00:04:22.626 "enable": false 00:04:22.626 } 00:04:22.626 }, 00:04:22.626 { 00:04:22.626 "method": "bdev_wait_for_examine" 00:04:22.626 } 00:04:22.627 ] 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "subsystem": "scsi", 00:04:22.627 "config": null 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "subsystem": "scheduler", 00:04:22.627 "config": [ 00:04:22.627 { 00:04:22.627 "method": "framework_set_scheduler", 00:04:22.627 "params": { 00:04:22.627 "name": "static" 00:04:22.627 } 00:04:22.627 } 00:04:22.627 ] 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "subsystem": "vhost_scsi", 00:04:22.627 "config": [] 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "subsystem": "vhost_blk", 00:04:22.627 "config": [] 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "subsystem": "ublk", 00:04:22.627 "config": [] 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "subsystem": "nbd", 00:04:22.627 "config": [] 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "subsystem": "nvmf", 00:04:22.627 "config": [ 00:04:22.627 { 00:04:22.627 "method": "nvmf_set_config", 00:04:22.627 "params": { 00:04:22.627 "discovery_filter": "match_any", 00:04:22.627 "admin_cmd_passthru": { 00:04:22.627 "identify_ctrlr": false 00:04:22.627 }, 00:04:22.627 "dhchap_digests": [ 00:04:22.627 "sha256", 00:04:22.627 "sha384", 00:04:22.627 "sha512" 00:04:22.627 ], 00:04:22.627 "dhchap_dhgroups": [ 00:04:22.627 "null", 00:04:22.627 "ffdhe2048", 00:04:22.627 "ffdhe3072", 00:04:22.627 "ffdhe4096", 00:04:22.627 "ffdhe6144", 00:04:22.627 "ffdhe8192" 00:04:22.627 ] 00:04:22.627 } 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "method": "nvmf_set_max_subsystems", 00:04:22.627 "params": { 00:04:22.627 "max_subsystems": 1024 00:04:22.627 } 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "method": "nvmf_set_crdt", 00:04:22.627 "params": { 00:04:22.627 "crdt1": 0, 00:04:22.627 "crdt2": 0, 00:04:22.627 "crdt3": 0 00:04:22.627 } 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "method": "nvmf_create_transport", 00:04:22.627 "params": { 00:04:22.627 "trtype": "TCP", 00:04:22.627 "max_queue_depth": 128, 00:04:22.627 "max_io_qpairs_per_ctrlr": 127, 00:04:22.627 "in_capsule_data_size": 4096, 00:04:22.627 "max_io_size": 131072, 00:04:22.627 "io_unit_size": 131072, 00:04:22.627 "max_aq_depth": 128, 00:04:22.627 "num_shared_buffers": 511, 00:04:22.627 "buf_cache_size": 4294967295, 00:04:22.627 "dif_insert_or_strip": false, 00:04:22.627 "zcopy": false, 00:04:22.627 "c2h_success": true, 00:04:22.627 "sock_priority": 0, 00:04:22.627 "abort_timeout_sec": 1, 00:04:22.627 "ack_timeout": 0, 00:04:22.627 "data_wr_pool_size": 0 00:04:22.627 } 00:04:22.627 } 00:04:22.627 ] 00:04:22.627 }, 00:04:22.627 { 00:04:22.627 "subsystem": "iscsi", 00:04:22.627 "config": [ 00:04:22.627 { 00:04:22.627 "method": "iscsi_set_options", 00:04:22.627 "params": { 00:04:22.627 "node_base": "iqn.2016-06.io.spdk", 00:04:22.627 "max_sessions": 128, 00:04:22.627 
"max_connections_per_session": 2, 00:04:22.627 "max_queue_depth": 64, 00:04:22.627 "default_time2wait": 2, 00:04:22.627 "default_time2retain": 20, 00:04:22.627 "first_burst_length": 8192, 00:04:22.627 "immediate_data": true, 00:04:22.627 "allow_duplicated_isid": false, 00:04:22.627 "error_recovery_level": 0, 00:04:22.627 "nop_timeout": 60, 00:04:22.627 "nop_in_interval": 30, 00:04:22.627 "disable_chap": false, 00:04:22.627 "require_chap": false, 00:04:22.627 "mutual_chap": false, 00:04:22.627 "chap_group": 0, 00:04:22.627 "max_large_datain_per_connection": 64, 00:04:22.627 "max_r2t_per_connection": 4, 00:04:22.627 "pdu_pool_size": 36864, 00:04:22.627 "immediate_data_pool_size": 16384, 00:04:22.627 "data_out_pool_size": 2048 00:04:22.627 } 00:04:22.627 } 00:04:22.627 ] 00:04:22.627 } 00:04:22.627 ] 00:04:22.627 } 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 782093 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 782093 ']' 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 782093 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782093 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 782093' 00:04:22.627 killing process with pid 782093 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 782093 00:04:22.627 11:38:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 782093 00:04:22.887 11:38:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=782387 00:04:22.887 11:38:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:22.887 11:38:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 782387 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 782387 ']' 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 782387 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 782387 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 782387' 00:04:28.174 killing process with pid 782387 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 782387 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 782387 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:28.174 00:04:28.174 real 0m6.542s 00:04:28.174 user 0m6.434s 00:04:28.174 sys 0m0.566s 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.174 ************************************ 00:04:28.174 END TEST skip_rpc_with_json 00:04:28.174 ************************************ 00:04:28.174 11:38:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:28.174 11:38:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.174 11:38:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.174 11:38:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.174 ************************************ 00:04:28.174 START TEST skip_rpc_with_delay 00:04:28.174 ************************************ 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.174 [2024-10-11 11:38:12.764940] app.c: 
842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:28.174 00:04:28.174 real 0m0.079s 00:04:28.174 user 0m0.050s 00:04:28.174 sys 0m0.029s 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.174 11:38:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:28.174 ************************************ 00:04:28.174 END TEST skip_rpc_with_delay 00:04:28.174 ************************************ 00:04:28.436 11:38:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:28.436 11:38:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:28.436 11:38:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:28.436 11:38:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.436 11:38:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.436 11:38:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.436 ************************************ 00:04:28.436 START TEST exit_on_failed_rpc_init 00:04:28.436 ************************************ 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=783558 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 783558 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 783558 ']' 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.436 11:38:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.436 [2024-10-11 11:38:12.920768] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:28.436 [2024-10-11 11:38:12.920827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783558 ] 00:04:28.436 [2024-10-11 11:38:13.000361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.436 [2024-10-11 11:38:13.036565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.377 [2024-10-11 11:38:13.775531] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:29.377 [2024-10-11 11:38:13.775583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783793 ] 00:04:29.377 [2024-10-11 11:38:13.851699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.377 [2024-10-11 11:38:13.887473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.377 [2024-10-11 11:38:13.887524] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:29.377 [2024-10-11 11:38:13.887534] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:29.377 [2024-10-11 11:38:13.887541] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 783558 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 783558 ']' 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 783558 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 783558 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 783558' 00:04:29.377 killing process with pid 783558 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 783558 00:04:29.377 11:38:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 783558 00:04:29.638 00:04:29.638 real 0m1.311s 00:04:29.638 user 0m1.498s 00:04:29.638 sys 0m0.411s 00:04:29.638 11:38:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.638 11:38:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.638 ************************************ 00:04:29.638 END TEST exit_on_failed_rpc_init 00:04:29.638 ************************************ 00:04:29.638 11:38:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.638 00:04:29.638 real 0m13.721s 00:04:29.638 user 0m13.232s 00:04:29.638 sys 0m1.617s 00:04:29.638 11:38:14 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.638 11:38:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.638 ************************************ 00:04:29.638 END TEST skip_rpc 00:04:29.638 ************************************ 00:04:29.638 11:38:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:29.638 11:38:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.638 11:38:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.638 11:38:14 -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.900 ************************************ 00:04:29.900 START TEST rpc_client 00:04:29.900 ************************************ 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:29.900 * Looking for test storage... 00:04:29.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.900 11:38:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.900 --rc genhtml_branch_coverage=1 00:04:29.900 --rc genhtml_function_coverage=1 00:04:29.900 --rc genhtml_legend=1 00:04:29.900 --rc geninfo_all_blocks=1 00:04:29.900 --rc geninfo_unexecuted_blocks=1 00:04:29.900 00:04:29.900 ' 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.900 --rc genhtml_branch_coverage=1 00:04:29.900 --rc genhtml_function_coverage=1 00:04:29.900 --rc genhtml_legend=1 00:04:29.900 --rc geninfo_all_blocks=1 00:04:29.900 --rc geninfo_unexecuted_blocks=1 00:04:29.900 00:04:29.900 ' 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.900 --rc genhtml_branch_coverage=1 00:04:29.900 --rc genhtml_function_coverage=1 00:04:29.900 --rc genhtml_legend=1 00:04:29.900 --rc geninfo_all_blocks=1 00:04:29.900 --rc geninfo_unexecuted_blocks=1 00:04:29.900 00:04:29.900 ' 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:29.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.900 --rc genhtml_branch_coverage=1 00:04:29.900 --rc genhtml_function_coverage=1 00:04:29.900 --rc genhtml_legend=1 00:04:29.900 --rc geninfo_all_blocks=1 00:04:29.900 --rc geninfo_unexecuted_blocks=1 00:04:29.900 00:04:29.900 ' 00:04:29.900 11:38:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:29.900 OK 00:04:29.900 11:38:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.900 00:04:29.900 real 0m0.225s 00:04:29.900 user 0m0.142s 00:04:29.900 sys 0m0.096s 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.900 11:38:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:29.900 ************************************ 00:04:29.900 END TEST rpc_client 00:04:29.900 ************************************ 00:04:30.162 11:38:14 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:30.162 11:38:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.162 11:38:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.162 11:38:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.162 ************************************ 00:04:30.162 START TEST json_config 00:04:30.162 ************************************ 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:30.162 11:38:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.162 11:38:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.162 11:38:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.162 11:38:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.162 11:38:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.162 11:38:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.162 11:38:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.162 11:38:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.162 11:38:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.162 11:38:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.162 11:38:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.162 11:38:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:30.162 11:38:14 json_config -- scripts/common.sh@345 -- # : 1 00:04:30.162 11:38:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.162 11:38:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.162 11:38:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:30.162 11:38:14 json_config -- scripts/common.sh@353 -- # local d=1 00:04:30.162 11:38:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.162 11:38:14 json_config -- scripts/common.sh@355 -- # echo 1 00:04:30.162 11:38:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.162 11:38:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:30.162 11:38:14 json_config -- scripts/common.sh@353 -- # local d=2 00:04:30.162 11:38:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.162 11:38:14 json_config -- scripts/common.sh@355 -- # echo 2 00:04:30.162 11:38:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.162 11:38:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.162 11:38:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.162 11:38:14 json_config -- scripts/common.sh@368 -- # return 0 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.162 --rc genhtml_branch_coverage=1 00:04:30.162 --rc genhtml_function_coverage=1 00:04:30.162 --rc genhtml_legend=1 00:04:30.162 --rc geninfo_all_blocks=1 00:04:30.162 --rc geninfo_unexecuted_blocks=1 00:04:30.162 00:04:30.162 ' 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.162 --rc genhtml_branch_coverage=1 00:04:30.162 --rc genhtml_function_coverage=1 00:04:30.162 --rc genhtml_legend=1 00:04:30.162 --rc geninfo_all_blocks=1 00:04:30.162 --rc geninfo_unexecuted_blocks=1 00:04:30.162 00:04:30.162 ' 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.162 --rc genhtml_branch_coverage=1 00:04:30.162 --rc genhtml_function_coverage=1 00:04:30.162 --rc genhtml_legend=1 00:04:30.162 --rc geninfo_all_blocks=1 00:04:30.162 --rc geninfo_unexecuted_blocks=1 00:04:30.162 00:04:30.162 ' 00:04:30.162 11:38:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.162 --rc genhtml_branch_coverage=1 00:04:30.162 --rc genhtml_function_coverage=1 00:04:30.162 --rc genhtml_legend=1 00:04:30.162 --rc geninfo_all_blocks=1 00:04:30.162 --rc geninfo_unexecuted_blocks=1 00:04:30.162 00:04:30.162 ' 00:04:30.162 11:38:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:30.162 11:38:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:30.162 11:38:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.423 11:38:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.423 11:38:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.423 11:38:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.423 11:38:14 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:30.423 11:38:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.423 11:38:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.423 11:38:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.423 11:38:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.423 11:38:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.423 11:38:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.423 11:38:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.423 11:38:14 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.424 11:38:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@51 -- # : 0 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:30.424 11:38:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.424 11:38:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:30.424 INFO: JSON configuration test init 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.424 11:38:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:30.424 11:38:14 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:30.424 11:38:14 json_config -- json_config/common.sh@10 -- # shift 00:04:30.424 11:38:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.424 11:38:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.424 11:38:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.424 11:38:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.424 11:38:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.424 11:38:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=784154 00:04:30.424 11:38:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.424 Waiting for target to run... 00:04:30.424 11:38:14 json_config -- json_config/common.sh@25 -- # waitforlisten 784154 /var/tmp/spdk_tgt.sock 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@831 -- # '[' -z 784154 ']' 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.424 11:38:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.424 11:38:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.424 [2024-10-11 11:38:14.896916] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:30.424 [2024-10-11 11:38:14.896996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784154 ] 00:04:30.684 [2024-10-11 11:38:15.155848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.684 [2024-10-11 11:38:15.181816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.255 11:38:15 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.255 11:38:15 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:31.255 11:38:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.255 00:04:31.255 11:38:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:31.255 11:38:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:31.255 11:38:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.255 11:38:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.255 11:38:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:31.255 11:38:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:31.255 11:38:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.255 11:38:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.255 11:38:15 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:31.255 11:38:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:31.255 11:38:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:31.829 11:38:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.829 11:38:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:31.829 11:38:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:31.829 11:38:16 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@54 -- # sort 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:31.829 11:38:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:31.829 11:38:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.829 11:38:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:32.091 11:38:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.091 11:38:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.091 11:38:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.091 MallocForNvmf0 00:04:32.091 11:38:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.091 11:38:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.352 MallocForNvmf1 00:04:32.352 11:38:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.352 11:38:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.352 [2024-10-11 11:38:16.961615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.352 11:38:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:32.352 11:38:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:32.613 11:38:17 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:32.613 11:38:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:32.873 11:38:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:32.873 11:38:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:32.873 11:38:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:32.873 11:38:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.135 [2024-10-11 11:38:17.627671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:33.135 11:38:17 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:33.135 11:38:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:33.135 11:38:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.135 11:38:17 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:33.135 11:38:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:33.135 11:38:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.135 11:38:17 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:33.135 11:38:17 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:33.135 11:38:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:33.396 MallocBdevForConfigChangeCheck 00:04:33.396 11:38:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:33.396 11:38:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:33.396 11:38:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.396 11:38:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:33.396 11:38:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.657 11:38:18 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:33.657 INFO: shutting down applications... 
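Condensed from the RPC traces above, the provisioning sequence that produced the saved configuration amounts to the following sketch; every RPC call below appears verbatim in the trace, and only the RPC shell variable wrapping is new.

RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json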
00:04:33.657 11:38:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:33.657 11:38:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:33.657 11:38:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:33.657 11:38:18 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:34.228 Calling clear_iscsi_subsystem 00:04:34.228 Calling clear_nvmf_subsystem 00:04:34.228 Calling clear_nbd_subsystem 00:04:34.228 Calling clear_ublk_subsystem 00:04:34.228 Calling clear_vhost_blk_subsystem 00:04:34.228 Calling clear_vhost_scsi_subsystem 00:04:34.228 Calling clear_bdev_subsystem 00:04:34.228 11:38:18 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:34.228 11:38:18 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:34.228 11:38:18 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:34.228 11:38:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.228 11:38:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:34.228 11:38:18 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:34.489 11:38:19 json_config -- json_config/json_config.sh@352 -- # break 00:04:34.489 11:38:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:34.489 11:38:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:34.489 11:38:19 json_config -- json_config/common.sh@31 -- # local app=target 00:04:34.489 11:38:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.489 11:38:19 json_config -- json_config/common.sh@35 -- # [[ -n 784154 ]] 00:04:34.489 11:38:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 784154 00:04:34.489 11:38:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.489 11:38:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.489 11:38:19 json_config -- json_config/common.sh@41 -- # kill -0 784154 00:04:34.489 11:38:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.059 11:38:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.059 11:38:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.059 11:38:19 json_config -- json_config/common.sh@41 -- # kill -0 784154 00:04:35.059 11:38:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:35.059 11:38:19 json_config -- json_config/common.sh@43 -- # break 00:04:35.059 11:38:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:35.059 11:38:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:35.059 SPDK target shutdown done 00:04:35.059 11:38:19 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:35.059 INFO: relaunching applications... 
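The relaunch that follows restarts the target from the configuration captured earlier with save_config. The pattern, roughly, using the same paths and flags as this run:

  # Capture the live configuration, then restart the target from it.
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &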
00:04:35.059 11:38:19 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.059 11:38:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:35.059 11:38:19 json_config -- json_config/common.sh@10 -- # shift 00:04:35.059 11:38:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:35.059 11:38:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:35.059 11:38:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:35.059 11:38:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.059 11:38:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.059 11:38:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=785155 00:04:35.059 11:38:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:35.059 Waiting for target to run... 00:04:35.059 11:38:19 json_config -- json_config/common.sh@25 -- # waitforlisten 785155 /var/tmp/spdk_tgt.sock 00:04:35.059 11:38:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.059 11:38:19 json_config -- common/autotest_common.sh@831 -- # '[' -z 785155 ']' 00:04:35.059 11:38:19 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.059 11:38:19 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.059 11:38:19 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:35.059 11:38:19 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.059 11:38:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.059 [2024-10-11 11:38:19.669102] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:35.059 [2024-10-11 11:38:19.669169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785155 ] 00:04:35.320 [2024-10-11 11:38:19.936260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.580 [2024-10-11 11:38:19.963147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.840 [2024-10-11 11:38:20.460163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.101 [2024-10-11 11:38:20.492513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.101 11:38:20 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.101 11:38:20 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:36.101 11:38:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:36.101 00:04:36.101 11:38:20 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:36.101 11:38:20 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:36.101 INFO: Checking if target configuration is the same... 
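json_diff.sh, traced below, decides whether the live configuration still matches the saved file by normalizing both JSON documents and diffing them. A simplified sketch of that core (the real script feeds the live config in via /dev/fd instead of a pipe):

  # Normalize both configs with config_filter.py, then compare them.
  tmp1=$(mktemp /tmp/62.XXX)
  tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > "$tmp1"
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$tmp2"
  diff -u "$tmp1" "$tmp2" && echo 'INFO: JSON config files are the same'
  rm "$tmp1" "$tmp2"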
00:04:36.101 11:38:20 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.101 11:38:20 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:36.101 11:38:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.101 + '[' 2 -ne 2 ']' 00:04:36.101 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:36.101 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:36.101 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:36.101 +++ basename /dev/fd/62 00:04:36.101 ++ mktemp /tmp/62.XXX 00:04:36.101 + tmp_file_1=/tmp/62.UIt 00:04:36.101 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.101 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:36.101 + tmp_file_2=/tmp/spdk_tgt_config.json.YZB 00:04:36.101 + ret=0 00:04:36.101 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.362 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.362 + diff -u /tmp/62.UIt /tmp/spdk_tgt_config.json.YZB 00:04:36.362 + echo 'INFO: JSON config files are the same' 00:04:36.362 INFO: JSON config files are the same 00:04:36.362 + rm /tmp/62.UIt /tmp/spdk_tgt_config.json.YZB 00:04:36.362 + exit 0 00:04:36.362 11:38:20 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:36.362 11:38:20 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:36.362 INFO: changing configuration and checking if this can be detected... 00:04:36.362 11:38:20 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:36.362 11:38:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:36.622 11:38:21 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.622 11:38:21 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:36.622 11:38:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.622 + '[' 2 -ne 2 ']' 00:04:36.622 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:36.622 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:36.622 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:36.622 +++ basename /dev/fd/62 00:04:36.622 ++ mktemp /tmp/62.XXX 00:04:36.623 + tmp_file_1=/tmp/62.zvP 00:04:36.623 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.623 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:36.623 + tmp_file_2=/tmp/spdk_tgt_config.json.dTm 00:04:36.623 + ret=0 00:04:36.623 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.883 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.883 + diff -u /tmp/62.zvP /tmp/spdk_tgt_config.json.dTm 00:04:36.883 + ret=1 00:04:36.883 + echo '=== Start of file: /tmp/62.zvP ===' 00:04:36.883 + cat /tmp/62.zvP 00:04:36.883 + echo '=== End of file: /tmp/62.zvP ===' 00:04:36.883 + echo '' 00:04:36.883 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dTm ===' 00:04:36.883 + cat /tmp/spdk_tgt_config.json.dTm 00:04:36.883 + echo '=== End of file: /tmp/spdk_tgt_config.json.dTm ===' 00:04:36.883 + echo '' 00:04:36.883 + rm /tmp/62.zvP /tmp/spdk_tgt_config.json.dTm 00:04:36.883 + exit 1 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:36.883 INFO: configuration change detected. 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:36.883 11:38:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.883 11:38:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@324 -- # [[ -n 785155 ]] 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:36.883 11:38:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.883 11:38:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:36.883 11:38:21 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:36.883 11:38:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.883 11:38:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.144 11:38:21 json_config -- json_config/json_config.sh@330 -- # killprocess 785155 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@950 -- # '[' -z 785155 ']' 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@954 -- # kill -0 785155 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@955 -- # uname 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.144 11:38:21 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 785155 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 785155' 00:04:37.144 killing process with pid 785155 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@969 -- # kill 785155 00:04:37.144 11:38:21 json_config -- common/autotest_common.sh@974 -- # wait 785155 00:04:37.405 11:38:21 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.405 11:38:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:37.405 11:38:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.405 11:38:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.405 11:38:21 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:37.405 11:38:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:37.405 INFO: Success 00:04:37.405 00:04:37.405 real 0m7.302s 00:04:37.405 user 0m9.058s 00:04:37.405 sys 0m1.662s 00:04:37.405 11:38:21 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.405 11:38:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.405 ************************************ 00:04:37.405 END TEST json_config 00:04:37.405 ************************************ 00:04:37.405 11:38:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:37.405 11:38:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.405 11:38:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.405 11:38:21 -- common/autotest_common.sh@10 -- # set +x 00:04:37.405 ************************************ 00:04:37.405 START TEST json_config_extra_key 00:04:37.405 ************************************ 00:04:37.405 11:38:21 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.667 11:38:22 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.667 --rc genhtml_branch_coverage=1 00:04:37.667 --rc genhtml_function_coverage=1 00:04:37.667 --rc genhtml_legend=1 00:04:37.667 --rc geninfo_all_blocks=1 00:04:37.667 --rc geninfo_unexecuted_blocks=1 00:04:37.667 00:04:37.667 ' 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.667 --rc genhtml_branch_coverage=1 00:04:37.667 --rc genhtml_function_coverage=1 00:04:37.667 --rc genhtml_legend=1 00:04:37.667 --rc geninfo_all_blocks=1 00:04:37.667 --rc geninfo_unexecuted_blocks=1 00:04:37.667 00:04:37.667 ' 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.667 --rc genhtml_branch_coverage=1 00:04:37.667 --rc genhtml_function_coverage=1 00:04:37.667 --rc genhtml_legend=1 00:04:37.667 --rc geninfo_all_blocks=1 00:04:37.667 --rc geninfo_unexecuted_blocks=1 00:04:37.667 00:04:37.667 ' 00:04:37.667 11:38:22 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.667 --rc genhtml_branch_coverage=1 00:04:37.667 --rc genhtml_function_coverage=1 00:04:37.667 --rc genhtml_legend=1 00:04:37.667 --rc geninfo_all_blocks=1 00:04:37.667 --rc geninfo_unexecuted_blocks=1 00:04:37.667 00:04:37.667 ' 00:04:37.667 11:38:22 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:37.667 11:38:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:37.667 11:38:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.667 11:38:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.667 11:38:22 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.667 11:38:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:37.667 11:38:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:37.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:37.667 11:38:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:37.667 INFO: launching applications... 
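The launch below starts a fresh target from test/json_config/extra_key.json and then waits for its RPC socket. waitforlisten in common.sh does more bookkeeping (PID checks, retry limits); the core idea is simply to poll until the socket answers. A sketch, using spdk_get_version as one cheap query that works here:

  # Start the target, then poll the RPC socket until it responds.
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done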
00:04:37.667 11:38:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:37.667 11:38:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:37.667 11:38:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:37.667 11:38:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.667 11:38:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.667 11:38:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.667 11:38:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.667 11:38:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.667 11:38:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=785854 00:04:37.668 11:38:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.668 Waiting for target to run... 00:04:37.668 11:38:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 785854 /var/tmp/spdk_tgt.sock 00:04:37.668 11:38:22 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 785854 ']' 00:04:37.668 11:38:22 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.668 11:38:22 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.668 11:38:22 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:37.668 11:38:22 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.668 11:38:22 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.668 11:38:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.668 [2024-10-11 11:38:22.239091] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:37.668 [2024-10-11 11:38:22.239163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785854 ] 00:04:37.928 [2024-10-11 11:38:22.515091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.928 [2024-10-11 11:38:22.544556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.499 11:38:23 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.499 11:38:23 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:38.499 00:04:38.499 11:38:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:38.499 INFO: shutting down applications... 
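The shutdown that follows is the same graceful pattern used for the first target: ask with SIGINT, then poll the PID up to 30 times at half-second intervals before giving up. Its skeleton, as traced below:

  # Graceful shutdown loop from json_config/common.sh, reduced to its skeleton.
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # process gone? stop waiting
      sleep 0.5
  done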
00:04:38.499 11:38:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 785854 ]] 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 785854 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 785854 00:04:38.499 11:38:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.071 11:38:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.071 11:38:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.071 11:38:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 785854 00:04:39.071 11:38:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.071 11:38:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:39.071 11:38:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.071 11:38:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.071 SPDK target shutdown done 00:04:39.071 11:38:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:39.071 Success 00:04:39.071 00:04:39.071 real 0m1.561s 00:04:39.071 user 0m1.181s 00:04:39.071 sys 0m0.409s 00:04:39.071 11:38:23 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.071 11:38:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.071 ************************************ 00:04:39.071 END TEST json_config_extra_key 00:04:39.071 ************************************ 00:04:39.071 11:38:23 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.071 11:38:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.071 11:38:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.071 11:38:23 -- common/autotest_common.sh@10 -- # set +x 00:04:39.071 ************************************ 00:04:39.071 START TEST alias_rpc 00:04:39.071 ************************************ 00:04:39.071 11:38:23 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.333 * Looking for test storage... 
00:04:39.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:39.333 11:38:23 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.333 11:38:23 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.333 11:38:23 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.333 11:38:23 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.333 11:38:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:39.333 11:38:23 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.333 11:38:23 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.333 --rc genhtml_branch_coverage=1 00:04:39.333 --rc genhtml_function_coverage=1 00:04:39.333 --rc genhtml_legend=1 00:04:39.333 --rc geninfo_all_blocks=1 00:04:39.333 --rc geninfo_unexecuted_blocks=1 00:04:39.333 00:04:39.333 ' 00:04:39.333 11:38:23 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.333 --rc genhtml_branch_coverage=1 00:04:39.333 --rc genhtml_function_coverage=1 00:04:39.333 --rc genhtml_legend=1 00:04:39.333 --rc geninfo_all_blocks=1 00:04:39.333 --rc geninfo_unexecuted_blocks=1 00:04:39.333 00:04:39.333 ' 00:04:39.333 11:38:23 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.333 --rc genhtml_branch_coverage=1 00:04:39.333 --rc genhtml_function_coverage=1 00:04:39.333 --rc genhtml_legend=1 00:04:39.333 --rc geninfo_all_blocks=1 00:04:39.333 --rc geninfo_unexecuted_blocks=1 00:04:39.333 00:04:39.333 ' 00:04:39.333 11:38:23 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.333 --rc genhtml_branch_coverage=1 00:04:39.333 --rc genhtml_function_coverage=1 00:04:39.333 --rc genhtml_legend=1 00:04:39.333 --rc geninfo_all_blocks=1 00:04:39.333 --rc geninfo_unexecuted_blocks=1 00:04:39.333 00:04:39.333 ' 00:04:39.333 11:38:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:39.333 11:38:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=786239 00:04:39.333 11:38:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 786239 00:04:39.334 11:38:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.334 11:38:23 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 786239 ']' 00:04:39.334 11:38:23 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.334 11:38:23 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.334 11:38:23 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.334 11:38:23 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.334 11:38:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.334 [2024-10-11 11:38:23.870962] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:39.334 [2024-10-11 11:38:23.871028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786239 ] 00:04:39.334 [2024-10-11 11:38:23.950985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.594 [2024-10-11 11:38:23.986345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.166 11:38:24 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.166 11:38:24 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:40.166 11:38:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:40.427 11:38:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 786239 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 786239 ']' 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 786239 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 786239 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 786239' 00:04:40.427 killing process with pid 786239 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@969 -- # kill 786239 00:04:40.427 11:38:24 alias_rpc -- common/autotest_common.sh@974 -- # wait 786239 00:04:40.688 00:04:40.688 real 0m1.488s 00:04:40.688 user 0m1.629s 00:04:40.688 sys 0m0.421s 00:04:40.688 11:38:25 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.688 11:38:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.688 ************************************ 00:04:40.688 END TEST alias_rpc 00:04:40.688 ************************************ 00:04:40.688 11:38:25 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:40.688 11:38:25 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:40.688 11:38:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.688 11:38:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.688 11:38:25 -- common/autotest_common.sh@10 -- # set +x 00:04:40.688 ************************************ 00:04:40.688 START TEST spdkcli_tcp 00:04:40.688 ************************************ 00:04:40.688 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:40.688 * Looking for test storage... 
00:04:40.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:40.688 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:40.688 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:40.688 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.950 11:38:25 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.950 --rc genhtml_branch_coverage=1 00:04:40.950 --rc genhtml_function_coverage=1 00:04:40.950 --rc genhtml_legend=1 00:04:40.950 --rc geninfo_all_blocks=1 00:04:40.950 --rc geninfo_unexecuted_blocks=1 00:04:40.950 00:04:40.950 ' 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.950 --rc genhtml_branch_coverage=1 00:04:40.950 --rc genhtml_function_coverage=1 00:04:40.950 --rc genhtml_legend=1 00:04:40.950 --rc geninfo_all_blocks=1 00:04:40.950 --rc 
geninfo_unexecuted_blocks=1 00:04:40.950 00:04:40.950 ' 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.950 --rc genhtml_branch_coverage=1 00:04:40.950 --rc genhtml_function_coverage=1 00:04:40.950 --rc genhtml_legend=1 00:04:40.950 --rc geninfo_all_blocks=1 00:04:40.950 --rc geninfo_unexecuted_blocks=1 00:04:40.950 00:04:40.950 ' 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:40.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.950 --rc genhtml_branch_coverage=1 00:04:40.950 --rc genhtml_function_coverage=1 00:04:40.950 --rc genhtml_legend=1 00:04:40.950 --rc geninfo_all_blocks=1 00:04:40.950 --rc geninfo_unexecuted_blocks=1 00:04:40.950 00:04:40.950 ' 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=786645 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 786645 00:04:40.950 11:38:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 786645 ']' 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.950 11:38:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.950 [2024-10-11 11:38:25.442492] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
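Once this target is up, tcp.sh bridges the UNIX-domain RPC socket to TCP with socat so that rpc.py can reach it over 127.0.0.1:9998, as the trace below shows. In essence, with the same retry and timeout flags as this run:

  # Expose the UNIX RPC socket over TCP, then query it through the bridge.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods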
00:04:40.950 [2024-10-11 11:38:25.442562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786645 ] 00:04:40.950 [2024-10-11 11:38:25.523479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.950 [2024-10-11 11:38:25.560253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.950 [2024-10-11 11:38:25.560253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.893 11:38:26 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.893 11:38:26 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:41.893 11:38:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:41.893 11:38:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=786685 00:04:41.893 11:38:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:41.893 [ 00:04:41.893 "bdev_malloc_delete", 00:04:41.893 "bdev_malloc_create", 00:04:41.893 "bdev_null_resize", 00:04:41.893 "bdev_null_delete", 00:04:41.893 "bdev_null_create", 00:04:41.893 "bdev_nvme_cuse_unregister", 00:04:41.893 "bdev_nvme_cuse_register", 00:04:41.893 "bdev_opal_new_user", 00:04:41.893 "bdev_opal_set_lock_state", 00:04:41.893 "bdev_opal_delete", 00:04:41.893 "bdev_opal_get_info", 00:04:41.893 "bdev_opal_create", 00:04:41.893 "bdev_nvme_opal_revert", 00:04:41.893 "bdev_nvme_opal_init", 00:04:41.893 "bdev_nvme_send_cmd", 00:04:41.893 "bdev_nvme_set_keys", 00:04:41.893 "bdev_nvme_get_path_iostat", 00:04:41.893 "bdev_nvme_get_mdns_discovery_info", 00:04:41.893 "bdev_nvme_stop_mdns_discovery", 00:04:41.893 "bdev_nvme_start_mdns_discovery", 00:04:41.893 "bdev_nvme_set_multipath_policy", 00:04:41.893 "bdev_nvme_set_preferred_path", 00:04:41.893 "bdev_nvme_get_io_paths", 00:04:41.893 "bdev_nvme_remove_error_injection", 00:04:41.893 "bdev_nvme_add_error_injection", 00:04:41.893 "bdev_nvme_get_discovery_info", 00:04:41.893 "bdev_nvme_stop_discovery", 00:04:41.893 "bdev_nvme_start_discovery", 00:04:41.893 "bdev_nvme_get_controller_health_info", 00:04:41.893 "bdev_nvme_disable_controller", 00:04:41.893 "bdev_nvme_enable_controller", 00:04:41.893 "bdev_nvme_reset_controller", 00:04:41.893 "bdev_nvme_get_transport_statistics", 00:04:41.893 "bdev_nvme_apply_firmware", 00:04:41.893 "bdev_nvme_detach_controller", 00:04:41.893 "bdev_nvme_get_controllers", 00:04:41.893 "bdev_nvme_attach_controller", 00:04:41.893 "bdev_nvme_set_hotplug", 00:04:41.893 "bdev_nvme_set_options", 00:04:41.893 "bdev_passthru_delete", 00:04:41.893 "bdev_passthru_create", 00:04:41.893 "bdev_lvol_set_parent_bdev", 00:04:41.893 "bdev_lvol_set_parent", 00:04:41.893 "bdev_lvol_check_shallow_copy", 00:04:41.893 "bdev_lvol_start_shallow_copy", 00:04:41.893 "bdev_lvol_grow_lvstore", 00:04:41.893 "bdev_lvol_get_lvols", 00:04:41.893 "bdev_lvol_get_lvstores", 00:04:41.893 "bdev_lvol_delete", 00:04:41.893 "bdev_lvol_set_read_only", 00:04:41.893 "bdev_lvol_resize", 00:04:41.893 "bdev_lvol_decouple_parent", 00:04:41.893 "bdev_lvol_inflate", 00:04:41.893 "bdev_lvol_rename", 00:04:41.893 "bdev_lvol_clone_bdev", 00:04:41.893 "bdev_lvol_clone", 00:04:41.893 "bdev_lvol_snapshot", 00:04:41.893 "bdev_lvol_create", 00:04:41.893 "bdev_lvol_delete_lvstore", 00:04:41.893 "bdev_lvol_rename_lvstore", 
00:04:41.893 "bdev_lvol_create_lvstore", 00:04:41.893 "bdev_raid_set_options", 00:04:41.893 "bdev_raid_remove_base_bdev", 00:04:41.893 "bdev_raid_add_base_bdev", 00:04:41.893 "bdev_raid_delete", 00:04:41.893 "bdev_raid_create", 00:04:41.893 "bdev_raid_get_bdevs", 00:04:41.893 "bdev_error_inject_error", 00:04:41.893 "bdev_error_delete", 00:04:41.893 "bdev_error_create", 00:04:41.893 "bdev_split_delete", 00:04:41.893 "bdev_split_create", 00:04:41.893 "bdev_delay_delete", 00:04:41.893 "bdev_delay_create", 00:04:41.893 "bdev_delay_update_latency", 00:04:41.893 "bdev_zone_block_delete", 00:04:41.893 "bdev_zone_block_create", 00:04:41.893 "blobfs_create", 00:04:41.893 "blobfs_detect", 00:04:41.893 "blobfs_set_cache_size", 00:04:41.893 "bdev_aio_delete", 00:04:41.893 "bdev_aio_rescan", 00:04:41.893 "bdev_aio_create", 00:04:41.893 "bdev_ftl_set_property", 00:04:41.893 "bdev_ftl_get_properties", 00:04:41.893 "bdev_ftl_get_stats", 00:04:41.893 "bdev_ftl_unmap", 00:04:41.893 "bdev_ftl_unload", 00:04:41.893 "bdev_ftl_delete", 00:04:41.893 "bdev_ftl_load", 00:04:41.893 "bdev_ftl_create", 00:04:41.893 "bdev_virtio_attach_controller", 00:04:41.893 "bdev_virtio_scsi_get_devices", 00:04:41.893 "bdev_virtio_detach_controller", 00:04:41.893 "bdev_virtio_blk_set_hotplug", 00:04:41.893 "bdev_iscsi_delete", 00:04:41.893 "bdev_iscsi_create", 00:04:41.893 "bdev_iscsi_set_options", 00:04:41.893 "accel_error_inject_error", 00:04:41.893 "ioat_scan_accel_module", 00:04:41.893 "dsa_scan_accel_module", 00:04:41.894 "iaa_scan_accel_module", 00:04:41.894 "vfu_virtio_create_fs_endpoint", 00:04:41.894 "vfu_virtio_create_scsi_endpoint", 00:04:41.894 "vfu_virtio_scsi_remove_target", 00:04:41.894 "vfu_virtio_scsi_add_target", 00:04:41.894 "vfu_virtio_create_blk_endpoint", 00:04:41.894 "vfu_virtio_delete_endpoint", 00:04:41.894 "keyring_file_remove_key", 00:04:41.894 "keyring_file_add_key", 00:04:41.894 "keyring_linux_set_options", 00:04:41.894 "fsdev_aio_delete", 00:04:41.894 "fsdev_aio_create", 00:04:41.894 "iscsi_get_histogram", 00:04:41.894 "iscsi_enable_histogram", 00:04:41.894 "iscsi_set_options", 00:04:41.894 "iscsi_get_auth_groups", 00:04:41.894 "iscsi_auth_group_remove_secret", 00:04:41.894 "iscsi_auth_group_add_secret", 00:04:41.894 "iscsi_delete_auth_group", 00:04:41.894 "iscsi_create_auth_group", 00:04:41.894 "iscsi_set_discovery_auth", 00:04:41.894 "iscsi_get_options", 00:04:41.894 "iscsi_target_node_request_logout", 00:04:41.894 "iscsi_target_node_set_redirect", 00:04:41.894 "iscsi_target_node_set_auth", 00:04:41.894 "iscsi_target_node_add_lun", 00:04:41.894 "iscsi_get_stats", 00:04:41.894 "iscsi_get_connections", 00:04:41.894 "iscsi_portal_group_set_auth", 00:04:41.894 "iscsi_start_portal_group", 00:04:41.894 "iscsi_delete_portal_group", 00:04:41.894 "iscsi_create_portal_group", 00:04:41.894 "iscsi_get_portal_groups", 00:04:41.894 "iscsi_delete_target_node", 00:04:41.894 "iscsi_target_node_remove_pg_ig_maps", 00:04:41.894 "iscsi_target_node_add_pg_ig_maps", 00:04:41.894 "iscsi_create_target_node", 00:04:41.894 "iscsi_get_target_nodes", 00:04:41.894 "iscsi_delete_initiator_group", 00:04:41.894 "iscsi_initiator_group_remove_initiators", 00:04:41.894 "iscsi_initiator_group_add_initiators", 00:04:41.894 "iscsi_create_initiator_group", 00:04:41.894 "iscsi_get_initiator_groups", 00:04:41.894 "nvmf_set_crdt", 00:04:41.894 "nvmf_set_config", 00:04:41.894 "nvmf_set_max_subsystems", 00:04:41.894 "nvmf_stop_mdns_prr", 00:04:41.894 "nvmf_publish_mdns_prr", 00:04:41.894 "nvmf_subsystem_get_listeners", 00:04:41.894 
"nvmf_subsystem_get_qpairs", 00:04:41.894 "nvmf_subsystem_get_controllers", 00:04:41.894 "nvmf_get_stats", 00:04:41.894 "nvmf_get_transports", 00:04:41.894 "nvmf_create_transport", 00:04:41.894 "nvmf_get_targets", 00:04:41.894 "nvmf_delete_target", 00:04:41.894 "nvmf_create_target", 00:04:41.894 "nvmf_subsystem_allow_any_host", 00:04:41.894 "nvmf_subsystem_set_keys", 00:04:41.894 "nvmf_subsystem_remove_host", 00:04:41.894 "nvmf_subsystem_add_host", 00:04:41.894 "nvmf_ns_remove_host", 00:04:41.894 "nvmf_ns_add_host", 00:04:41.894 "nvmf_subsystem_remove_ns", 00:04:41.894 "nvmf_subsystem_set_ns_ana_group", 00:04:41.894 "nvmf_subsystem_add_ns", 00:04:41.894 "nvmf_subsystem_listener_set_ana_state", 00:04:41.894 "nvmf_discovery_get_referrals", 00:04:41.894 "nvmf_discovery_remove_referral", 00:04:41.894 "nvmf_discovery_add_referral", 00:04:41.894 "nvmf_subsystem_remove_listener", 00:04:41.894 "nvmf_subsystem_add_listener", 00:04:41.894 "nvmf_delete_subsystem", 00:04:41.894 "nvmf_create_subsystem", 00:04:41.894 "nvmf_get_subsystems", 00:04:41.894 "env_dpdk_get_mem_stats", 00:04:41.894 "nbd_get_disks", 00:04:41.894 "nbd_stop_disk", 00:04:41.894 "nbd_start_disk", 00:04:41.894 "ublk_recover_disk", 00:04:41.894 "ublk_get_disks", 00:04:41.894 "ublk_stop_disk", 00:04:41.894 "ublk_start_disk", 00:04:41.894 "ublk_destroy_target", 00:04:41.894 "ublk_create_target", 00:04:41.894 "virtio_blk_create_transport", 00:04:41.894 "virtio_blk_get_transports", 00:04:41.894 "vhost_controller_set_coalescing", 00:04:41.894 "vhost_get_controllers", 00:04:41.894 "vhost_delete_controller", 00:04:41.894 "vhost_create_blk_controller", 00:04:41.894 "vhost_scsi_controller_remove_target", 00:04:41.894 "vhost_scsi_controller_add_target", 00:04:41.894 "vhost_start_scsi_controller", 00:04:41.894 "vhost_create_scsi_controller", 00:04:41.894 "thread_set_cpumask", 00:04:41.894 "scheduler_set_options", 00:04:41.894 "framework_get_governor", 00:04:41.894 "framework_get_scheduler", 00:04:41.894 "framework_set_scheduler", 00:04:41.894 "framework_get_reactors", 00:04:41.894 "thread_get_io_channels", 00:04:41.894 "thread_get_pollers", 00:04:41.894 "thread_get_stats", 00:04:41.894 "framework_monitor_context_switch", 00:04:41.894 "spdk_kill_instance", 00:04:41.894 "log_enable_timestamps", 00:04:41.894 "log_get_flags", 00:04:41.894 "log_clear_flag", 00:04:41.894 "log_set_flag", 00:04:41.894 "log_get_level", 00:04:41.894 "log_set_level", 00:04:41.894 "log_get_print_level", 00:04:41.894 "log_set_print_level", 00:04:41.894 "framework_enable_cpumask_locks", 00:04:41.894 "framework_disable_cpumask_locks", 00:04:41.894 "framework_wait_init", 00:04:41.894 "framework_start_init", 00:04:41.894 "scsi_get_devices", 00:04:41.894 "bdev_get_histogram", 00:04:41.894 "bdev_enable_histogram", 00:04:41.894 "bdev_set_qos_limit", 00:04:41.894 "bdev_set_qd_sampling_period", 00:04:41.894 "bdev_get_bdevs", 00:04:41.894 "bdev_reset_iostat", 00:04:41.894 "bdev_get_iostat", 00:04:41.894 "bdev_examine", 00:04:41.894 "bdev_wait_for_examine", 00:04:41.894 "bdev_set_options", 00:04:41.894 "accel_get_stats", 00:04:41.894 "accel_set_options", 00:04:41.894 "accel_set_driver", 00:04:41.894 "accel_crypto_key_destroy", 00:04:41.894 "accel_crypto_keys_get", 00:04:41.894 "accel_crypto_key_create", 00:04:41.894 "accel_assign_opc", 00:04:41.894 "accel_get_module_info", 00:04:41.894 "accel_get_opc_assignments", 00:04:41.894 "vmd_rescan", 00:04:41.894 "vmd_remove_device", 00:04:41.894 "vmd_enable", 00:04:41.894 "sock_get_default_impl", 00:04:41.894 "sock_set_default_impl", 
00:04:41.894 "sock_impl_set_options", 00:04:41.894 "sock_impl_get_options", 00:04:41.894 "iobuf_get_stats", 00:04:41.894 "iobuf_set_options", 00:04:41.894 "keyring_get_keys", 00:04:41.894 "vfu_tgt_set_base_path", 00:04:41.894 "framework_get_pci_devices", 00:04:41.894 "framework_get_config", 00:04:41.894 "framework_get_subsystems", 00:04:41.894 "fsdev_set_opts", 00:04:41.894 "fsdev_get_opts", 00:04:41.894 "trace_get_info", 00:04:41.894 "trace_get_tpoint_group_mask", 00:04:41.894 "trace_disable_tpoint_group", 00:04:41.894 "trace_enable_tpoint_group", 00:04:41.894 "trace_clear_tpoint_mask", 00:04:41.894 "trace_set_tpoint_mask", 00:04:41.894 "notify_get_notifications", 00:04:41.894 "notify_get_types", 00:04:41.894 "spdk_get_version", 00:04:41.894 "rpc_get_methods" 00:04:41.894 ] 00:04:41.894 11:38:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:41.894 11:38:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:41.894 11:38:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.894 11:38:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:41.894 11:38:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 786645 00:04:41.894 11:38:26 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 786645 ']' 00:04:41.894 11:38:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 786645 00:04:41.894 11:38:26 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:41.894 11:38:26 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.894 11:38:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 786645 00:04:42.155 11:38:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.155 11:38:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.155 11:38:26 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 786645' 00:04:42.155 killing process with pid 786645 00:04:42.155 11:38:26 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 786645 00:04:42.155 11:38:26 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 786645 00:04:42.155 00:04:42.155 real 0m1.543s 00:04:42.155 user 0m2.829s 00:04:42.155 sys 0m0.472s 00:04:42.155 11:38:26 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.155 11:38:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.155 ************************************ 00:04:42.155 END TEST spdkcli_tcp 00:04:42.155 ************************************ 00:04:42.155 11:38:26 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.155 11:38:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.155 11:38:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.155 11:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:42.416 ************************************ 00:04:42.416 START TEST dpdk_mem_utility 00:04:42.416 ************************************ 00:04:42.416 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.416 * Looking for test storage... 
00:04:42.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:42.416 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.416 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.416 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.416 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:42.416 11:38:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.417 11:38:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.417 --rc genhtml_branch_coverage=1 00:04:42.417 --rc genhtml_function_coverage=1 00:04:42.417 --rc genhtml_legend=1 00:04:42.417 --rc geninfo_all_blocks=1 00:04:42.417 --rc geninfo_unexecuted_blocks=1 00:04:42.417 00:04:42.417 ' 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.417 --rc 
genhtml_branch_coverage=1 00:04:42.417 --rc genhtml_function_coverage=1 00:04:42.417 --rc genhtml_legend=1 00:04:42.417 --rc geninfo_all_blocks=1 00:04:42.417 --rc geninfo_unexecuted_blocks=1 00:04:42.417 00:04:42.417 ' 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.417 --rc genhtml_branch_coverage=1 00:04:42.417 --rc genhtml_function_coverage=1 00:04:42.417 --rc genhtml_legend=1 00:04:42.417 --rc geninfo_all_blocks=1 00:04:42.417 --rc geninfo_unexecuted_blocks=1 00:04:42.417 00:04:42.417 ' 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.417 --rc genhtml_branch_coverage=1 00:04:42.417 --rc genhtml_function_coverage=1 00:04:42.417 --rc genhtml_legend=1 00:04:42.417 --rc geninfo_all_blocks=1 00:04:42.417 --rc geninfo_unexecuted_blocks=1 00:04:42.417 00:04:42.417 ' 00:04:42.417 11:38:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:42.417 11:38:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=787054 00:04:42.417 11:38:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 787054 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 787054 ']' 00:04:42.417 11:38:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.417 11:38:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.678 [2024-10-11 11:38:27.055139] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:42.678 [2024-10-11 11:38:27.055208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787054 ] 00:04:42.678 [2024-10-11 11:38:27.136442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.678 [2024-10-11 11:38:27.171869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.248 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.248 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:43.248 11:38:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:43.248 11:38:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:43.248 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.248 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.248 { 00:04:43.248 "filename": "/tmp/spdk_mem_dump.txt" 00:04:43.248 } 00:04:43.248 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.248 11:38:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.509 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:43.509 1 heaps totaling size 810.000000 MiB 00:04:43.509 size: 810.000000 MiB heap id: 0 00:04:43.509 end heaps---------- 00:04:43.509 9 mempools totaling size 595.772034 MiB 00:04:43.509 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:43.509 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:43.509 size: 92.545471 MiB name: bdev_io_787054 00:04:43.509 size: 50.003479 MiB name: msgpool_787054 00:04:43.509 size: 36.509338 MiB name: fsdev_io_787054 00:04:43.509 size: 21.763794 MiB name: PDU_Pool 00:04:43.509 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:43.509 size: 4.133484 MiB name: evtpool_787054 00:04:43.509 size: 0.026123 MiB name: Session_Pool 00:04:43.509 end mempools------- 00:04:43.509 6 memzones totaling size 4.142822 MiB 00:04:43.509 size: 1.000366 MiB name: RG_ring_0_787054 00:04:43.509 size: 1.000366 MiB name: RG_ring_1_787054 00:04:43.509 size: 1.000366 MiB name: RG_ring_4_787054 00:04:43.509 size: 1.000366 MiB name: RG_ring_5_787054 00:04:43.509 size: 0.125366 MiB name: RG_ring_2_787054 00:04:43.509 size: 0.015991 MiB name: RG_ring_3_787054 00:04:43.509 end memzones------- 00:04:43.509 11:38:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:43.509 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:43.509 list of free elements. 
size: 10.862488 MiB 00:04:43.509 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:43.509 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:43.509 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:43.509 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:43.509 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:43.510 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:43.510 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:43.510 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:43.510 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:43.510 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:43.510 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:43.510 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:43.510 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:43.510 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:43.510 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:43.510 list of standard malloc elements. size: 199.218628 MiB 00:04:43.510 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:43.510 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:43.510 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:43.510 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:43.510 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:43.510 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:43.510 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:43.510 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:43.510 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:43.510 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:43.510 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:43.510 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:43.510 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:43.510 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:43.510 list of memzone associated elements. size: 599.918884 MiB 00:04:43.510 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:43.510 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:43.510 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:43.510 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:43.510 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:43.510 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_787054_0 00:04:43.510 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:43.510 associated memzone info: size: 48.002930 MiB name: MP_msgpool_787054_0 00:04:43.510 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:43.510 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_787054_0 00:04:43.510 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:43.510 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:43.510 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:43.510 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.510 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:43.510 associated memzone info: size: 3.000122 MiB name: MP_evtpool_787054_0 00:04:43.510 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:43.510 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_787054 00:04:43.510 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:43.510 associated memzone info: size: 1.007996 MiB name: MP_evtpool_787054 00:04:43.510 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:43.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.510 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:43.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.510 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:43.510 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.510 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:43.510 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.510 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:43.510 associated memzone info: size: 1.000366 MiB name: RG_ring_0_787054 00:04:43.510 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:43.510 associated memzone info: size: 1.000366 MiB name: RG_ring_1_787054 00:04:43.510 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:43.510 associated memzone info: size: 1.000366 MiB name: RG_ring_4_787054 00:04:43.510 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:43.510 associated memzone info: size: 1.000366 MiB name: RG_ring_5_787054 00:04:43.510 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:43.510 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_787054 00:04:43.510 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:43.510 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_787054 00:04:43.510 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:43.510 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.510 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:43.510 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.510 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:43.510 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.510 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:43.510 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_787054 00:04:43.510 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:43.510 associated memzone info: size: 0.125366 MiB name: RG_ring_2_787054 00:04:43.510 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:43.510 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.510 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:43.510 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.510 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:43.510 associated memzone info: size: 0.015991 MiB name: RG_ring_3_787054 00:04:43.510 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:43.510 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.510 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:43.510 associated memzone info: size: 0.000183 MiB name: MP_msgpool_787054 00:04:43.510 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:43.510 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_787054 00:04:43.510 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:43.510 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_787054 00:04:43.510 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:43.510 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.510 11:38:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.510 11:38:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 787054 00:04:43.510 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 787054 ']' 00:04:43.510 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 787054 00:04:43.510 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:43.510 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.510 11:38:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 787054 00:04:43.510 11:38:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.510 11:38:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.510 11:38:28 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 787054' 00:04:43.510 killing process with pid 787054 00:04:43.510 11:38:28 dpdk_mem_utility -- 
common/autotest_common.sh@969 -- # kill 787054 00:04:43.510 11:38:28 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 787054 00:04:43.771 00:04:43.771 real 0m1.390s 00:04:43.771 user 0m1.480s 00:04:43.771 sys 0m0.399s 00:04:43.771 11:38:28 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.771 11:38:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.771 ************************************ 00:04:43.771 END TEST dpdk_mem_utility 00:04:43.771 ************************************ 00:04:43.771 11:38:28 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:43.771 11:38:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.771 11:38:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.771 11:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:43.771 ************************************ 00:04:43.771 START TEST event 00:04:43.771 ************************************ 00:04:43.771 11:38:28 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:43.771 * Looking for test storage... 00:04:43.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:43.771 11:38:28 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:43.771 11:38:28 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:43.771 11:38:28 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:44.032 11:38:28 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:44.032 11:38:28 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.032 11:38:28 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.032 11:38:28 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.032 11:38:28 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.032 11:38:28 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.032 11:38:28 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.032 11:38:28 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.032 11:38:28 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.032 11:38:28 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.032 11:38:28 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.032 11:38:28 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.032 11:38:28 event -- scripts/common.sh@344 -- # case "$op" in 00:04:44.032 11:38:28 event -- scripts/common.sh@345 -- # : 1 00:04:44.032 11:38:28 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.032 11:38:28 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.032 11:38:28 event -- scripts/common.sh@365 -- # decimal 1 00:04:44.032 11:38:28 event -- scripts/common.sh@353 -- # local d=1 00:04:44.032 11:38:28 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.032 11:38:28 event -- scripts/common.sh@355 -- # echo 1 00:04:44.032 11:38:28 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.032 11:38:28 event -- scripts/common.sh@366 -- # decimal 2 00:04:44.032 11:38:28 event -- scripts/common.sh@353 -- # local d=2 00:04:44.032 11:38:28 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.032 11:38:28 event -- scripts/common.sh@355 -- # echo 2 00:04:44.032 11:38:28 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.032 11:38:28 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.032 11:38:28 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.032 11:38:28 event -- scripts/common.sh@368 -- # return 0 00:04:44.032 11:38:28 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.032 11:38:28 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:44.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.032 --rc genhtml_branch_coverage=1 00:04:44.032 --rc genhtml_function_coverage=1 00:04:44.032 --rc genhtml_legend=1 00:04:44.032 --rc geninfo_all_blocks=1 00:04:44.032 --rc geninfo_unexecuted_blocks=1 00:04:44.032 00:04:44.032 ' 00:04:44.032 11:38:28 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:44.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.032 --rc genhtml_branch_coverage=1 00:04:44.032 --rc genhtml_function_coverage=1 00:04:44.032 --rc genhtml_legend=1 00:04:44.032 --rc geninfo_all_blocks=1 00:04:44.032 --rc geninfo_unexecuted_blocks=1 00:04:44.032 00:04:44.032 ' 00:04:44.032 11:38:28 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:44.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.032 --rc genhtml_branch_coverage=1 00:04:44.032 --rc genhtml_function_coverage=1 00:04:44.032 --rc genhtml_legend=1 00:04:44.032 --rc geninfo_all_blocks=1 00:04:44.032 --rc geninfo_unexecuted_blocks=1 00:04:44.032 00:04:44.032 ' 00:04:44.032 11:38:28 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:44.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.032 --rc genhtml_branch_coverage=1 00:04:44.032 --rc genhtml_function_coverage=1 00:04:44.032 --rc genhtml_legend=1 00:04:44.032 --rc geninfo_all_blocks=1 00:04:44.032 --rc geninfo_unexecuted_blocks=1 00:04:44.032 00:04:44.032 ' 00:04:44.032 11:38:28 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:44.032 11:38:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:44.032 11:38:28 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.032 11:38:28 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:44.032 11:38:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.032 11:38:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.032 ************************************ 00:04:44.032 START TEST event_perf 00:04:44.032 ************************************ 00:04:44.032 11:38:28 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:44.032 Running I/O for 1 seconds...[2024-10-11 11:38:28.524619] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:44.032 [2024-10-11 11:38:28.524736] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787449 ] 00:04:44.032 [2024-10-11 11:38:28.608943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.032 [2024-10-11 11:38:28.653829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.032 [2024-10-11 11:38:28.654040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.032 [2024-10-11 11:38:28.654196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.032 [2024-10-11 11:38:28.654196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.417 Running I/O for 1 seconds... 00:04:45.417 lcore 0: 179970 00:04:45.417 lcore 1: 179973 00:04:45.417 lcore 2: 179972 00:04:45.417 lcore 3: 179973 00:04:45.417 done. 00:04:45.417 00:04:45.417 real 0m1.178s 00:04:45.417 user 0m4.089s 00:04:45.417 sys 0m0.087s 00:04:45.417 11:38:29 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.417 11:38:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.417 ************************************ 00:04:45.417 END TEST event_perf 00:04:45.417 ************************************ 00:04:45.417 11:38:29 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:45.417 11:38:29 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:45.417 11:38:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.417 11:38:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.417 ************************************ 00:04:45.417 START TEST event_reactor 00:04:45.417 ************************************ 00:04:45.417 11:38:29 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:45.417 [2024-10-11 11:38:29.776964] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:45.417 [2024-10-11 11:38:29.777066] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787571 ] 00:04:45.417 [2024-10-11 11:38:29.857859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.417 [2024-10-11 11:38:29.898176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.358 test_start 00:04:46.358 oneshot 00:04:46.358 tick 100 00:04:46.358 tick 100 00:04:46.358 tick 250 00:04:46.358 tick 100 00:04:46.358 tick 100 00:04:46.358 tick 100 00:04:46.358 tick 250 00:04:46.358 tick 500 00:04:46.358 tick 100 00:04:46.358 tick 100 00:04:46.358 tick 250 00:04:46.358 tick 100 00:04:46.358 tick 100 00:04:46.358 test_end 00:04:46.358 00:04:46.358 real 0m1.169s 00:04:46.358 user 0m1.088s 00:04:46.358 sys 0m0.077s 00:04:46.358 11:38:30 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.358 11:38:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.358 ************************************ 00:04:46.358 END TEST event_reactor 00:04:46.358 ************************************ 00:04:46.358 11:38:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.358 11:38:30 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:46.358 11:38:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.358 11:38:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.619 ************************************ 00:04:46.619 START TEST event_reactor_perf 00:04:46.619 ************************************ 00:04:46.619 11:38:31 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.619 [2024-10-11 11:38:31.025204] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:46.619 [2024-10-11 11:38:31.025299] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787845 ] 00:04:46.619 [2024-10-11 11:38:31.106965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.619 [2024-10-11 11:38:31.145846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.561 test_start 00:04:47.561 test_end 00:04:47.561 Performance: 542137 events per second 00:04:47.561 00:04:47.561 real 0m1.169s 00:04:47.561 user 0m1.084s 00:04:47.561 sys 0m0.081s 00:04:47.561 11:38:32 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.561 11:38:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.561 ************************************ 00:04:47.561 END TEST event_reactor_perf 00:04:47.561 ************************************ 00:04:47.822 11:38:32 event -- event/event.sh@49 -- # uname -s 00:04:47.822 11:38:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.822 11:38:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.822 11:38:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.822 11:38:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.822 11:38:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.822 ************************************ 00:04:47.822 START TEST event_scheduler 00:04:47.822 ************************************ 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.822 * Looking for test storage... 
00:04:47.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.822 11:38:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.822 --rc genhtml_branch_coverage=1 00:04:47.822 --rc genhtml_function_coverage=1 00:04:47.822 --rc genhtml_legend=1 00:04:47.822 --rc geninfo_all_blocks=1 00:04:47.822 --rc geninfo_unexecuted_blocks=1 00:04:47.822 00:04:47.822 ' 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.822 --rc genhtml_branch_coverage=1 00:04:47.822 --rc genhtml_function_coverage=1 00:04:47.822 --rc genhtml_legend=1 00:04:47.822 --rc geninfo_all_blocks=1 00:04:47.822 --rc geninfo_unexecuted_blocks=1 00:04:47.822 00:04:47.822 ' 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.822 --rc genhtml_branch_coverage=1 00:04:47.822 --rc genhtml_function_coverage=1 00:04:47.822 --rc genhtml_legend=1 00:04:47.822 --rc geninfo_all_blocks=1 00:04:47.822 --rc geninfo_unexecuted_blocks=1 00:04:47.822 00:04:47.822 ' 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.822 --rc genhtml_branch_coverage=1 00:04:47.822 --rc genhtml_function_coverage=1 00:04:47.822 --rc genhtml_legend=1 00:04:47.822 --rc geninfo_all_blocks=1 00:04:47.822 --rc geninfo_unexecuted_blocks=1 00:04:47.822 00:04:47.822 ' 00:04:47.822 11:38:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.822 11:38:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=788233 00:04:47.822 11:38:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.822 11:38:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.822 11:38:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 788233 
00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 788233 ']' 00:04:47.822 11:38:32 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.823 11:38:32 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:47.823 11:38:32 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.823 11:38:32 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:47.823 11:38:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.083 [2024-10-11 11:38:32.502485] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:04:48.083 [2024-10-11 11:38:32.502554] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788233 ] 00:04:48.083 [2024-10-11 11:38:32.585661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.083 [2024-10-11 11:38:32.641324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.083 [2024-10-11 11:38:32.641486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.083 [2024-10-11 11:38:32.641647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.083 [2024-10-11 11:38:32.641648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:49.024 11:38:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 [2024-10-11 11:38:33.323989] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:49.024 [2024-10-11 11:38:33.324008] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:49.024 [2024-10-11 11:38:33.324017] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:49.024 [2024-10-11 11:38:33.324023] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:49.024 [2024-10-11 11:38:33.324029] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 [2024-10-11 11:38:33.390603] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
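The scheduler_create_thread subtest that follows drives the target through a test-only RPC plugin. Stripped of the harness wrappers, its calls reduce to the pattern below (a sketch; it assumes the scheduler_plugin module from test/event/scheduler is importable by rpc.py, which the harness is assumed to arrange via PYTHONPATH):

    # create pinned threads on core 0 (mask 0x1): one fully busy, one idle
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # create an unpinned thread, then drop its activity to 50% (thread id 11 in the trace below)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    # create and immediately delete a thread (thread id 12 in the trace below)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12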
00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 ************************************ 00:04:49.024 START TEST scheduler_create_thread 00:04:49.024 ************************************ 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 2 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 3 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 4 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 5 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 6 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 7 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 8 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.024 9 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.024 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.596 10 00:04:49.596 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.596 11:38:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:49.596 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.596 11:38:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.982 11:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.982 11:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:50.982 11:38:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:50.982 11:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:50.982 11:38:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.553 11:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.553 11:38:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:51.553 11:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.553 11:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.496 11:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.496 11:38:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:52.496 11:38:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:52.496 11:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.496 11:38:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.067 11:38:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.067 00:04:53.067 real 0m4.225s 00:04:53.067 user 0m0.026s 00:04:53.067 sys 0m0.006s 00:04:53.067 11:38:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.067 11:38:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.067 ************************************ 00:04:53.067 END TEST scheduler_create_thread 00:04:53.067 ************************************ 00:04:53.067 11:38:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:53.067 11:38:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 788233 00:04:53.067 11:38:37 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 788233 ']' 00:04:53.067 11:38:37 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 788233 00:04:53.067 11:38:37 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:53.328 11:38:37 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.328 11:38:37 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 788233 00:04:53.328 11:38:37 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:53.328 11:38:37 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:53.328 11:38:37 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 788233' 00:04:53.328 killing process with pid 788233 00:04:53.328 11:38:37 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 788233 00:04:53.328 11:38:37 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 788233 00:04:53.590 [2024-10-11 11:38:38.036445] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
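The dynamic scheduler exercised above is selected with stock RPCs (all three appear in the method inventory earlier in this log). On a target started with --wait-for-rpc the order matches the trace: pick the scheduler first, then finish initialization. A minimal sketch:

    ./scripts/rpc.py framework_set_scheduler dynamic   # replaces the default static scheduler
    ./scripts/rpc.py framework_start_init              # complete subsystem initialization
    ./scripts/rpc.py framework_get_scheduler           # confirm the active scheduler and its limits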
00:04:53.590 00:04:53.590 real 0m5.937s 00:04:53.590 user 0m13.904s 00:04:53.590 sys 0m0.404s 00:04:53.590 11:38:38 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.590 11:38:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.590 ************************************ 00:04:53.590 END TEST event_scheduler 00:04:53.590 ************************************ 00:04:53.851 11:38:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:53.851 11:38:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:53.851 11:38:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.851 11:38:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.851 11:38:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.851 ************************************ 00:04:53.851 START TEST app_repeat 00:04:53.851 ************************************ 00:04:53.851 11:38:38 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=789430 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 789430' 00:04:53.851 Process app_repeat pid: 789430 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:53.851 spdk_app_start Round 0 00:04:53.851 11:38:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 789430 /var/tmp/spdk-nbd.sock 00:04:53.851 11:38:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 789430 ']' 00:04:53.851 11:38:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.851 11:38:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.851 11:38:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:53.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.852 11:38:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.852 11:38:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.852 [2024-10-11 11:38:38.314044] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:04:53.852 [2024-10-11 11:38:38.314110] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789430 ] 00:04:53.852 [2024-10-11 11:38:38.395506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.852 [2024-10-11 11:38:38.430349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.852 [2024-10-11 11:38:38.430349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.112 11:38:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.112 11:38:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:54.112 11:38:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.112 Malloc0 00:04:54.112 11:38:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.374 Malloc1 00:04:54.374 11:38:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.374 11:38:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.636 /dev/nbd0 00:04:54.636 11:38:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.636 11:38:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.636 1+0 records in 00:04:54.636 1+0 records out 00:04:54.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026969 s, 15.2 MB/s 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:54.636 11:38:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:54.636 11:38:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.636 11:38:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.636 11:38:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.636 /dev/nbd1 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.897 1+0 records in 00:04:54.897 1+0 records out 00:04:54.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236215 s, 17.3 MB/s 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:54.897 11:38:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.897 11:38:39 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.897 { 00:04:54.897 "nbd_device": "/dev/nbd0", 00:04:54.897 "bdev_name": "Malloc0" 00:04:54.897 }, 00:04:54.897 { 00:04:54.897 "nbd_device": "/dev/nbd1", 00:04:54.897 "bdev_name": "Malloc1" 00:04:54.897 } 00:04:54.897 ]' 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.897 { 00:04:54.897 "nbd_device": "/dev/nbd0", 00:04:54.897 "bdev_name": "Malloc0" 00:04:54.897 }, 00:04:54.897 { 00:04:54.897 "nbd_device": "/dev/nbd1", 00:04:54.897 "bdev_name": "Malloc1" 00:04:54.897 } 00:04:54.897 ]' 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.897 11:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.897 /dev/nbd1' 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.159 /dev/nbd1' 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.159 256+0 records in 00:04:55.159 256+0 records out 00:04:55.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127197 s, 82.4 MB/s 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.159 256+0 records in 00:04:55.159 256+0 records out 00:04:55.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118464 s, 88.5 MB/s 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.159 256+0 records in 00:04:55.159 256+0 records out 00:04:55.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126927 s, 82.6 MB/s 00:04:55.159 11:38:39 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.159 11:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.420 11:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.682 11:38:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.682 11:38:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.943 11:38:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.943 [2024-10-11 11:38:40.494124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.943 [2024-10-11 11:38:40.524190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.943 [2024-10-11 11:38:40.524190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.943 [2024-10-11 11:38:40.553030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.943 [2024-10-11 11:38:40.553060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.245 11:38:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.245 11:38:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:59.245 spdk_app_start Round 1 00:04:59.245 11:38:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 789430 /var/tmp/spdk-nbd.sock 00:04:59.245 11:38:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 789430 ']' 00:04:59.245 11:38:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.245 11:38:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.245 11:38:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:59.245 11:38:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.245 11:38:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.245 11:38:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.245 11:38:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:59.245 11:38:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.245 Malloc0 00:04:59.245 11:38:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.506 Malloc1 00:04:59.506 11:38:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.506 11:38:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.767 /dev/nbd0 00:04:59.767 11:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.767 11:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:59.767 1+0 records in 00:04:59.767 1+0 records out 00:04:59.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283767 s, 14.4 MB/s 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:59.767 11:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.767 11:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.767 11:38:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.767 /dev/nbd1 00:04:59.767 11:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.767 11:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:59.767 11:38:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.028 1+0 records in 00:05:00.028 1+0 records out 00:05:00.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299098 s, 13.7 MB/s 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:00.028 11:38:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:00.028 { 00:05:00.028 "nbd_device": "/dev/nbd0", 00:05:00.028 "bdev_name": "Malloc0" 00:05:00.028 }, 00:05:00.028 { 00:05:00.028 "nbd_device": "/dev/nbd1", 00:05:00.028 "bdev_name": "Malloc1" 00:05:00.028 } 00:05:00.028 ]' 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.028 { 00:05:00.028 "nbd_device": "/dev/nbd0", 00:05:00.028 "bdev_name": "Malloc0" 00:05:00.028 }, 00:05:00.028 { 00:05:00.028 "nbd_device": "/dev/nbd1", 00:05:00.028 "bdev_name": "Malloc1" 00:05:00.028 } 00:05:00.028 ]' 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.028 /dev/nbd1' 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.028 /dev/nbd1' 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.028 11:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.289 256+0 records in 00:05:00.289 256+0 records out 00:05:00.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127734 s, 82.1 MB/s 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.289 256+0 records in 00:05:00.289 256+0 records out 00:05:00.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120765 s, 86.8 MB/s 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.289 256+0 records in 00:05:00.289 256+0 records out 00:05:00.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127007 s, 82.6 MB/s 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.289 11:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.550 11:38:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.550 11:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.550 11:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.550 11:38:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.550 11:38:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.550 11:38:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.550 11:38:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.550 11:38:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.550 11:38:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.811 11:38:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.811 11:38:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.071 11:38:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.071 [2024-10-11 11:38:45.628179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.071 [2024-10-11 11:38:45.657064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.071 [2024-10-11 11:38:45.657064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.071 [2024-10-11 11:38:45.686392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.072 [2024-10-11 11:38:45.686422] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.369 11:38:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.370 11:38:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.370 spdk_app_start Round 2 00:05:04.370 11:38:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 789430 /var/tmp/spdk-nbd.sock 00:05:04.370 11:38:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 789430 ']' 00:05:04.370 11:38:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.370 11:38:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.370 11:38:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:04.370 11:38:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.370 11:38:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.370 11:38:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.370 11:38:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:04.370 11:38:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.370 Malloc0 00:05:04.370 11:38:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.630 Malloc1 00:05:04.630 11:38:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.631 11:38:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.631 /dev/nbd0 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:04.891 1+0 records in 00:05:04.891 1+0 records out 00:05:04.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278635 s, 14.7 MB/s 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.891 /dev/nbd1 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.891 1+0 records in 00:05:04.891 1+0 records out 00:05:04.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266648 s, 15.4 MB/s 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:04.891 11:38:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.891 11:38:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:05.152 { 00:05:05.152 "nbd_device": "/dev/nbd0", 00:05:05.152 "bdev_name": "Malloc0" 00:05:05.152 }, 00:05:05.152 { 00:05:05.152 "nbd_device": "/dev/nbd1", 00:05:05.152 "bdev_name": "Malloc1" 00:05:05.152 } 00:05:05.152 ]' 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.152 { 00:05:05.152 "nbd_device": "/dev/nbd0", 00:05:05.152 "bdev_name": "Malloc0" 00:05:05.152 }, 00:05:05.152 { 00:05:05.152 "nbd_device": "/dev/nbd1", 00:05:05.152 "bdev_name": "Malloc1" 00:05:05.152 } 00:05:05.152 ]' 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.152 /dev/nbd1' 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.152 /dev/nbd1' 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.152 256+0 records in 00:05:05.152 256+0 records out 00:05:05.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122079 s, 85.9 MB/s 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.152 11:38:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.414 256+0 records in 00:05:05.414 256+0 records out 00:05:05.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115629 s, 90.7 MB/s 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.414 256+0 records in 00:05:05.414 256+0 records out 00:05:05.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130936 s, 80.1 MB/s 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.414 11:38:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.414 11:38:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.414 11:38:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.414 11:38:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.414 11:38:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.414 11:38:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.414 11:38:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.414 11:38:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.414 11:38:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.676 11:38:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.937 11:38:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.937 11:38:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.198 11:38:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.198 [2024-10-11 11:38:50.708008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.198 [2024-10-11 11:38:50.738241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.198 [2024-10-11 11:38:50.738241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.198 [2024-10-11 11:38:50.767644] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.198 [2024-10-11 11:38:50.767676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.500 11:38:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 789430 /var/tmp/spdk-nbd.sock 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 789430 ']' 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:09.500 11:38:53 event.app_repeat -- event/event.sh@39 -- # killprocess 789430 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 789430 ']' 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 789430 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 789430 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 789430' 00:05:09.500 killing process with pid 789430 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@969 -- # kill 789430 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@974 -- # wait 789430 00:05:09.500 spdk_app_start is called in Round 0. 00:05:09.500 Shutdown signal received, stop current app iteration 00:05:09.500 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:05:09.500 spdk_app_start is called in Round 1. 00:05:09.500 Shutdown signal received, stop current app iteration 00:05:09.500 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:05:09.500 spdk_app_start is called in Round 2. 00:05:09.500 Shutdown signal received, stop current app iteration 00:05:09.500 Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 reinitialization... 00:05:09.500 spdk_app_start is called in Round 3. 
00:05:09.500 Shutdown signal received, stop current app iteration 00:05:09.500 11:38:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:09.500 11:38:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:09.500 00:05:09.500 real 0m15.695s 00:05:09.500 user 0m34.730s 00:05:09.500 sys 0m2.240s 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.500 11:38:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.500 ************************************ 00:05:09.500 END TEST app_repeat 00:05:09.500 ************************************ 00:05:09.500 11:38:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:09.500 11:38:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:09.500 11:38:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.500 11:38:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.500 11:38:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.500 ************************************ 00:05:09.500 START TEST cpu_locks 00:05:09.500 ************************************ 00:05:09.500 11:38:54 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:09.762 * Looking for test storage... 00:05:09.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:09.762 11:38:54 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:09.762 11:38:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:09.762 11:38:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:09.762 11:38:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:09.762 11:38:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.762 11:38:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.763 11:38:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:09.763 11:38:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.763 11:38:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:09.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.763 --rc genhtml_branch_coverage=1 00:05:09.763 --rc genhtml_function_coverage=1 00:05:09.763 --rc genhtml_legend=1 00:05:09.763 --rc geninfo_all_blocks=1 00:05:09.763 --rc geninfo_unexecuted_blocks=1 00:05:09.763 00:05:09.763 ' 00:05:09.763 11:38:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:09.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.763 --rc genhtml_branch_coverage=1 00:05:09.763 --rc genhtml_function_coverage=1 00:05:09.763 --rc genhtml_legend=1 00:05:09.763 --rc geninfo_all_blocks=1 00:05:09.763 --rc geninfo_unexecuted_blocks=1 00:05:09.763 00:05:09.763 ' 00:05:09.763 11:38:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:09.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.763 --rc genhtml_branch_coverage=1 00:05:09.763 --rc genhtml_function_coverage=1 00:05:09.763 --rc genhtml_legend=1 00:05:09.763 --rc geninfo_all_blocks=1 00:05:09.763 --rc geninfo_unexecuted_blocks=1 00:05:09.763 00:05:09.763 ' 00:05:09.763 11:38:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:09.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.763 --rc genhtml_branch_coverage=1 00:05:09.763 --rc genhtml_function_coverage=1 00:05:09.763 --rc genhtml_legend=1 00:05:09.763 --rc geninfo_all_blocks=1 00:05:09.763 --rc geninfo_unexecuted_blocks=1 00:05:09.763 00:05:09.763 ' 00:05:09.763 11:38:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:09.763 11:38:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:09.763 11:38:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:09.763 11:38:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:09.763 11:38:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.763 11:38:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.763 11:38:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.763 ************************************ 
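The lcov probe that opens cpu_locks above funnels through the version comparison in scripts/common.sh; a minimal sketch of that logic, assuming purely numeric dot-separated components (the traced decimal() validation is elided):

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds, so the new LCOV_OPTS get exported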
00:05:09.763 START TEST default_locks 00:05:09.763 ************************************ 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=792889 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 792889 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 792889 ']' 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.763 11:38:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.763 [2024-10-11 11:38:54.343165] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:09.763 [2024-10-11 11:38:54.343225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792889 ] 00:05:10.025 [2024-10-11 11:38:54.423970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.025 [2024-10-11 11:38:54.459810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.598 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.598 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:10.598 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 792889 00:05:10.598 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 792889 00:05:10.598 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.859 lslocks: write error 00:05:10.859 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 792889 00:05:10.859 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 792889 ']' 00:05:10.859 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 792889 00:05:10.859 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:10.859 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.859 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 792889 00:05:10.859 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 792889' 
00:05:11.119 killing process with pid 792889 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 792889 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 792889 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 792889 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 792889 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 792889 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 792889 ']' 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (792889) - No such process 00:05:11.119 ERROR: process (pid: 792889) is no longer running 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:11.119 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.120 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.120 11:38:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.120 00:05:11.120 real 0m1.409s 00:05:11.120 user 0m1.508s 00:05:11.120 sys 0m0.496s 00:05:11.120 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.120 11:38:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.120 ************************************ 00:05:11.120 END TEST default_locks 00:05:11.120 ************************************ 00:05:11.120 11:38:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:11.120 11:38:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.120 11:38:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.120 11:38:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.385 ************************************ 00:05:11.385 START TEST default_locks_via_rpc 00:05:11.385 ************************************ 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=793249 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 793249 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 793249 ']' 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
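The NOT-wrapped waitforlisten above is how default_locks proves its point: after killprocess, a second waitforlisten on the dead pid must fail. A condensed sketch of the NOT wrapper as the trace shows it (the real helper also inspects exit codes above 128 for signal deaths):

NOT() {
    local es=0
    "$@" || es=$?     # run the command, capture its failure
    ((es != 0))       # succeed only if the wrapped command failed
}
# Usage as in the trace: NOT waitforlisten 792889  -> success, because pid 792889 is gone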
00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.385 11:38:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.385 [2024-10-11 11:38:55.826386] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:11.385 [2024-10-11 11:38:55.826444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793249 ] 00:05:11.385 [2024-10-11 11:38:55.904636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.385 [2024-10-11 11:38:55.940628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 793249 00:05:12.327 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 793249 00:05:12.328 11:38:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 793249 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 793249 ']' 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 793249 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 793249 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.588 11:38:57 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 793249' 00:05:12.588 killing process with pid 793249 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 793249 00:05:12.588 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 793249 00:05:12.849 00:05:12.849 real 0m1.540s 00:05:12.849 user 0m1.702s 00:05:12.849 sys 0m0.495s 00:05:12.849 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.849 11:38:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.849 ************************************ 00:05:12.849 END TEST default_locks_via_rpc 00:05:12.849 ************************************ 00:05:12.849 11:38:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:12.849 11:38:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.849 11:38:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.849 11:38:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.849 ************************************ 00:05:12.849 START TEST non_locking_app_on_locked_coremask 00:05:12.849 ************************************ 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=793620 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 793620 /var/tmp/spdk.sock 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 793620 ']' 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.849 11:38:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.849 [2024-10-11 11:38:57.433109] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:12.849 [2024-10-11 11:38:57.433162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793620 ] 00:05:13.109 [2024-10-11 11:38:57.511016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.109 [2024-10-11 11:38:57.543144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=793634 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 793634 /var/tmp/spdk2.sock 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 793634 ']' 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.680 11:38:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.680 [2024-10-11 11:38:58.273265] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:13.680 [2024-10-11 11:38:58.273321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793634 ] 00:05:13.941 [2024-10-11 11:38:58.344204] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
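Stripped of the trace plumbing, non_locking_app_on_locked_coremask is two launches on the same core, the second opting out of the claim; a sketch assuming spdk_tgt is on PATH (the trace uses the full build/bin path):

spdk_tgt -m 0x1 &                                                  # pid 793620: claims core 0
pid1=$!
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 793634: same core, no claim
pid2=$!
# Both start: the second prints "CPU core locks deactivated." and never touches
# /var/tmp/spdk_cpu_lock_000, so claim_cpu_cores has nothing to collide with.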
00:05:13.941 [2024-10-11 11:38:58.344226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.941 [2024-10-11 11:38:58.406851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.512 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.512 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:14.512 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 793620 00:05:14.512 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.512 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 793620 00:05:15.084 lslocks: write error 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 793620 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 793620 ']' 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 793620 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 793620 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 793620' 00:05:15.084 killing process with pid 793620 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 793620 00:05:15.084 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 793620 00:05:15.344 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 793634 00:05:15.344 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 793634 ']' 00:05:15.344 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 793634 00:05:15.344 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:15.344 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.344 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 793634 00:05:15.606 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.606 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.606 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 793634' 00:05:15.606 killing 
process with pid 793634 00:05:15.606 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 793634 00:05:15.606 11:38:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 793634 00:05:15.606 00:05:15.606 real 0m2.803s 00:05:15.606 user 0m3.169s 00:05:15.606 sys 0m0.811s 00:05:15.606 11:39:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.606 11:39:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.606 ************************************ 00:05:15.606 END TEST non_locking_app_on_locked_coremask 00:05:15.606 ************************************ 00:05:15.606 11:39:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:15.606 11:39:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.606 11:39:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.606 11:39:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.867 ************************************ 00:05:15.867 START TEST locking_app_on_unlocked_coremask 00:05:15.867 ************************************ 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=794167 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 794167 /var/tmp/spdk.sock 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 794167 ']' 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.867 11:39:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.867 [2024-10-11 11:39:00.312705] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:15.867 [2024-10-11 11:39:00.312765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794167 ] 00:05:15.867 [2024-10-11 11:39:00.393333] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
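locks_exist, run for both pids above, is the one-liner visible in the trace; the recurring "lslocks: write error" is harmless: grep -q exits on the first match and lslocks reports the broken pipe.

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # true if pid $1 holds a core lock file
}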
00:05:15.867 [2024-10-11 11:39:00.393374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.867 [2024-10-11 11:39:00.435923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=794410 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 794410 /var/tmp/spdk2.sock 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 794410 ']' 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.810 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.810 [2024-10-11 11:39:01.167657] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
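waitforlisten, echoed repeatedly above, boils down to polling for the RPC socket while the target stays alive; a hypothetical reduction (rpc_addr and max_retries mirror the locals in the trace, but the loop body is an assumption, not the autotest_common.sh source):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $rpc_addr ]] && return 0   # socket exists: target is listening
        kill -0 "$pid" || return 1       # target died before it ever listened
        sleep 0.1
    done
    return 1
}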
00:05:16.810 [2024-10-11 11:39:01.167718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794410 ] 00:05:16.810 [2024-10-11 11:39:01.240411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.810 [2024-10-11 11:39:01.302842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.418 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.418 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:17.418 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 794410 00:05:17.418 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 794410 00:05:17.418 11:39:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.359 lslocks: write error 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 794167 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 794167 ']' 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 794167 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 794167 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 794167' 00:05:18.359 killing process with pid 794167 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 794167 00:05:18.359 11:39:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 794167 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 794410 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 794410 ']' 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 794410 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 794410 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.619 11:39:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 794410' 00:05:18.619 killing process with pid 794410 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 794410 00:05:18.619 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 794410 00:05:18.880 00:05:18.880 real 0m3.033s 00:05:18.880 user 0m3.365s 00:05:18.880 sys 0m0.939s 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.880 ************************************ 00:05:18.880 END TEST locking_app_on_unlocked_coremask 00:05:18.880 ************************************ 00:05:18.880 11:39:03 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:18.880 11:39:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.880 11:39:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.880 11:39:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.880 ************************************ 00:05:18.880 START TEST locking_app_on_locked_coremask 00:05:18.880 ************************************ 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=794833 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 794833 /var/tmp/spdk.sock 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 794833 ']' 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.880 11:39:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.880 [2024-10-11 11:39:03.418595] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:18.880 [2024-10-11 11:39:03.418650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794833 ] 00:05:18.880 [2024-10-11 11:39:03.497855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.140 [2024-10-11 11:39:03.531851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=795158 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 795158 /var/tmp/spdk2.sock 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 795158 /var/tmp/spdk2.sock 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 795158 /var/tmp/spdk2.sock 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 795158 ']' 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.711 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.711 [2024-10-11 11:39:04.263611] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
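The failure printed just below is claim_cpu_cores refusing core 0 because pid 794833 already holds it. As a two-shell illustration only (SPDK takes its claim inside app.c; the trace does not show whether an flock would contend with that mechanism), the shape of the failure can be mimicked on the same lock-file path the suite checks:

exec 9> /var/tmp/spdk_cpu_lock_000          # the per-core lock file for core 0
if ! flock -n 9; then
    echo 'Cannot create lock on core 0, probably another process has claimed it.'
fi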
00:05:19.711 [2024-10-11 11:39:04.263664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795158 ] 00:05:19.711 [2024-10-11 11:39:04.335371] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 794833 has claimed it. 00:05:19.711 [2024-10-11 11:39:04.335402] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:20.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (795158) - No such process 00:05:20.281 ERROR: process (pid: 795158) is no longer running 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 794833 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 794833 00:05:20.281 11:39:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.542 lslocks: write error 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 794833 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 794833 ']' 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 794833 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 794833 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 794833' 00:05:20.542 killing process with pid 794833 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 794833 00:05:20.542 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 794833 00:05:20.803 00:05:20.803 real 0m1.963s 00:05:20.803 user 0m2.234s 00:05:20.803 sys 0m0.521s 00:05:20.803 11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.803 
11:39:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.803 ************************************ 00:05:20.803 END TEST locking_app_on_locked_coremask 00:05:20.803 ************************************ 00:05:20.803 11:39:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:20.803 11:39:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.803 11:39:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.803 11:39:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.803 ************************************ 00:05:20.803 START TEST locking_overlapped_coremask 00:05:20.803 ************************************ 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=795357 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 795357 /var/tmp/spdk.sock 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 795357 ']' 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.803 11:39:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.063 [2024-10-11 11:39:05.457760] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
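locking_overlapped_coremask pairs -m 0x7 (cores 0-2, the three reactors below) with a second target at -m 0x1c (cores 2-4); the masks share exactly one core, which is where the claim fails:

printf 'overlap: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. core 2 is wanted by both masks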
00:05:21.063 [2024-10-11 11:39:05.457815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795357 ] 00:05:21.063 [2024-10-11 11:39:05.535402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:21.063 [2024-10-11 11:39:05.571254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.063 [2024-10-11 11:39:05.571407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.063 [2024-10-11 11:39:05.571408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=795538 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 795538 /var/tmp/spdk2.sock 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 795538 /var/tmp/spdk2.sock 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 795538 /var/tmp/spdk2.sock 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 795538 ']' 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.633 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.893 [2024-10-11 11:39:06.316393] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:21.893 [2024-10-11 11:39:06.316446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795538 ] 00:05:21.893 [2024-10-11 11:39:06.405568] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 795357 has claimed it. 00:05:21.893 [2024-10-11 11:39:06.405609] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (795538) - No such process 00:05:22.464 ERROR: process (pid: 795538) is no longer running 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 795357 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 795357 ']' 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 795357 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 795357 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 795357' 00:05:22.464 killing process with pid 795357 00:05:22.464 11:39:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 795357 00:05:22.464 11:39:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 795357 00:05:22.736 00:05:22.736 real 0m1.782s 00:05:22.736 user 0m5.186s 00:05:22.736 sys 0m0.385s 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.736 ************************************ 00:05:22.736 END TEST locking_overlapped_coremask 00:05:22.736 ************************************ 00:05:22.736 11:39:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:22.736 11:39:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.736 11:39:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.736 11:39:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.736 ************************************ 00:05:22.736 START TEST locking_overlapped_coremask_via_rpc 00:05:22.736 ************************************ 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=795798 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 795798 /var/tmp/spdk.sock 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 795798 ']' 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.736 11:39:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.736 [2024-10-11 11:39:07.318831] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:22.736 [2024-10-11 11:39:07.318885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795798 ] 00:05:23.002 [2024-10-11 11:39:07.398351] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
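check_remaining_locks, traced above after the overlap failure, verifies that only the surviving target's three lock files remain; as the trace shows it:

check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what a 0x7 mask should hold
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}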
00:05:23.002 [2024-10-11 11:39:07.398387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.002 [2024-10-11 11:39:07.438368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.003 [2024-10-11 11:39:07.438519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.003 [2024-10-11 11:39:07.438521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=795907 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 795907 /var/tmp/spdk2.sock 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 795907 ']' 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.613 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.613 [2024-10-11 11:39:08.147925] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:23.613 [2024-10-11 11:39:08.147966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795907 ] 00:05:23.613 [2024-10-11 11:39:08.231430] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.613 [2024-10-11 11:39:08.231462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.899 [2024-10-11 11:39:08.309533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.899 [2024-10-11 11:39:08.309600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.899 [2024-10-11 11:39:08.309601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.500 [2024-10-11 11:39:08.978745] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 795798 has claimed it. 
00:05:24.500 request: 00:05:24.500 { 00:05:24.500 "method": "framework_enable_cpumask_locks", 00:05:24.500 "req_id": 1 00:05:24.500 } 00:05:24.500 Got JSON-RPC error response 00:05:24.500 response: 00:05:24.500 { 00:05:24.500 "code": -32603, 00:05:24.500 "message": "Failed to claim CPU core: 2" 00:05:24.500 } 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 795798 /var/tmp/spdk.sock 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 795798 ']' 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.500 11:39:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 795907 /var/tmp/spdk2.sock 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 795907 ']' 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
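The -32603 "Failed to claim CPU core: 2" response above is the expected outcome: the first target (pid 795798, mask 0x7) already took the lock on core 2 when framework_enable_cpumask_locks was issued against the default socket, so the same RPC against the second target (mask 0x1c) must fail. A hedged reproduction with the bundled client, where -s selects the second target's socket:

    ./scripts/rpc.py framework_enable_cpumask_locks                         # first target claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: Failed to claim CPU core: 2

That non-zero exit is exactly what the NOT/es=1 bookkeeping just above asserts as a pass.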
00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.790 00:05:24.790 real 0m2.078s 00:05:24.790 user 0m0.858s 00:05:24.790 sys 0m0.147s 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.790 11:39:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.790 ************************************ 00:05:24.790 END TEST locking_overlapped_coremask_via_rpc 00:05:24.790 ************************************ 00:05:24.790 11:39:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:24.790 11:39:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 795798 ]] 00:05:24.790 11:39:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 795798 00:05:24.790 11:39:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 795798 ']' 00:05:24.790 11:39:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 795798 00:05:24.790 11:39:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:24.790 11:39:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.790 11:39:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 795798 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 795798' 00:05:25.072 killing process with pid 795798 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 795798 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 795798 00:05:25.072 11:39:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 795907 ]] 00:05:25.072 11:39:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 795907 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 795907 ']' 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 795907 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
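check_remaining_locks passes because the lock files left on disk are exactly those for the first target's mask 0x7: the glob /var/tmp/spdk_cpu_lock_* is matched against the brace expansion {000..002}, one file per claimed core, numbered by core id. The same check by hand:

    ls /var/tmp/spdk_cpu_lock_*
    # expected for -m 0x7: spdk_cpu_lock_000  spdk_cpu_lock_001  spdk_cpu_lock_002

With that verified, the cleanup that follows kills both targets.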
00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 795907 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 795907' 00:05:25.072 killing process with pid 795907 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 795907 00:05:25.072 11:39:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 795907 00:05:25.373 11:39:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.373 11:39:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:25.373 11:39:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 795798 ]] 00:05:25.373 11:39:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 795798 00:05:25.373 11:39:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 795798 ']' 00:05:25.373 11:39:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 795798 00:05:25.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (795798) - No such process 00:05:25.373 11:39:09 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 795798 is not found' 00:05:25.373 Process with pid 795798 is not found 00:05:25.373 11:39:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 795907 ]] 00:05:25.373 11:39:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 795907 00:05:25.373 11:39:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 795907 ']' 00:05:25.373 11:39:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 795907 00:05:25.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (795907) - No such process 00:05:25.373 11:39:09 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 795907 is not found' 00:05:25.373 Process with pid 795907 is not found 00:05:25.373 11:39:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.373 00:05:25.373 real 0m15.844s 00:05:25.373 user 0m28.034s 00:05:25.373 sys 0m4.723s 00:05:25.373 11:39:09 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.373 11:39:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.373 ************************************ 00:05:25.373 END TEST cpu_locks 00:05:25.373 ************************************ 00:05:25.373 00:05:25.373 real 0m41.670s 00:05:25.373 user 1m23.232s 00:05:25.373 sys 0m8.024s 00:05:25.373 11:39:09 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.373 11:39:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.373 ************************************ 00:05:25.373 END TEST event 00:05:25.373 ************************************ 00:05:25.373 11:39:09 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:25.373 11:39:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.373 11:39:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.373 11:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:25.674 ************************************ 00:05:25.674 START TEST thread 00:05:25.674 ************************************ 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:25.674 * Looking for test storage... 00:05:25.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.674 11:39:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.674 11:39:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.674 11:39:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.674 11:39:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.674 11:39:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.674 11:39:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.674 11:39:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.674 11:39:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.674 11:39:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.674 11:39:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.674 11:39:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.674 11:39:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:25.674 11:39:10 thread -- scripts/common.sh@345 -- # : 1 00:05:25.674 11:39:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.674 11:39:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.674 11:39:10 thread -- scripts/common.sh@365 -- # decimal 1 00:05:25.674 11:39:10 thread -- scripts/common.sh@353 -- # local d=1 00:05:25.674 11:39:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.674 11:39:10 thread -- scripts/common.sh@355 -- # echo 1 00:05:25.674 11:39:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.674 11:39:10 thread -- scripts/common.sh@366 -- # decimal 2 00:05:25.674 11:39:10 thread -- scripts/common.sh@353 -- # local d=2 00:05:25.674 11:39:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.674 11:39:10 thread -- scripts/common.sh@355 -- # echo 2 00:05:25.674 11:39:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.674 11:39:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.674 11:39:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.674 11:39:10 thread -- scripts/common.sh@368 -- # return 0 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.674 --rc genhtml_branch_coverage=1 00:05:25.674 --rc genhtml_function_coverage=1 00:05:25.674 --rc genhtml_legend=1 00:05:25.674 --rc geninfo_all_blocks=1 00:05:25.674 --rc geninfo_unexecuted_blocks=1 00:05:25.674 00:05:25.674 ' 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.674 --rc genhtml_branch_coverage=1 00:05:25.674 --rc genhtml_function_coverage=1 00:05:25.674 --rc genhtml_legend=1 00:05:25.674 --rc geninfo_all_blocks=1 00:05:25.674 --rc geninfo_unexecuted_blocks=1 00:05:25.674 00:05:25.674 ' 00:05:25.674 11:39:10 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.674 --rc genhtml_branch_coverage=1 00:05:25.674 --rc genhtml_function_coverage=1 00:05:25.674 --rc genhtml_legend=1 00:05:25.674 --rc geninfo_all_blocks=1 00:05:25.674 --rc geninfo_unexecuted_blocks=1 00:05:25.674 00:05:25.674 ' 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.674 --rc genhtml_branch_coverage=1 00:05:25.674 --rc genhtml_function_coverage=1 00:05:25.674 --rc genhtml_legend=1 00:05:25.674 --rc geninfo_all_blocks=1 00:05:25.674 --rc geninfo_unexecuted_blocks=1 00:05:25.674 00:05:25.674 ' 00:05:25.674 11:39:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.674 11:39:10 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.674 ************************************ 00:05:25.674 START TEST thread_poller_perf 00:05:25.674 ************************************ 00:05:25.674 11:39:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.674 [2024-10-11 11:39:10.262925] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:25.674 [2024-10-11 11:39:10.263025] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796744 ] 00:05:25.935 [2024-10-11 11:39:10.345024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.935 [2024-10-11 11:39:10.388112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.935 Running 1000 pollers for 1 seconds with 1 microseconds period. 
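The banner just printed restates poller_perf's flags as used on the command line above: -b is the number of pollers registered (1000), -t the measurement window in seconds, and -l the poller period in microseconds, so this run uses 1 us timed pollers while the second run further down passes -l 0 for busy pollers. A hedged standalone invocation from the spdk checkout:

    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1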
00:05:26.876 [2024-10-11T09:39:11.508Z] ====================================== 00:05:26.876 [2024-10-11T09:39:11.508Z] busy:2410034066 (cyc) 00:05:26.876 [2024-10-11T09:39:11.508Z] total_run_count: 417000 00:05:26.876 [2024-10-11T09:39:11.508Z] tsc_hz: 2400000000 (cyc) 00:05:26.876 [2024-10-11T09:39:11.508Z] ====================================== 00:05:26.876 [2024-10-11T09:39:11.508Z] poller_cost: 5779 (cyc), 2407 (nsec) 00:05:26.876 00:05:26.876 real 0m1.182s 00:05:26.876 user 0m1.094s 00:05:26.876 sys 0m0.084s 00:05:26.876 11:39:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.876 11:39:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.876 ************************************ 00:05:26.876 END TEST thread_poller_perf 00:05:26.876 ************************************ 00:05:26.876 11:39:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:26.876 11:39:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:26.876 11:39:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.876 11:39:11 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.876 ************************************ 00:05:26.876 START TEST thread_poller_perf 00:05:26.876 ************************************ 00:05:26.876 11:39:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:27.136 [2024-10-11 11:39:11.524503] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:27.137 [2024-10-11 11:39:11.524589] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797187 ] 00:05:27.137 [2024-10-11 11:39:11.603450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.137 [2024-10-11 11:39:11.634259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.137 Running 1000 pollers for 1 seconds with 0 microseconds period. 
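The run-1 summary above reduces to two truncating divisions: poller_cost in cycles is busy cycles over total_run_count, and the nanosecond figure divides that by ticks per nanosecond (tsc_hz / 10^9 = 2.4). Checking the logged numbers:

    echo $(( 2410034066 / 417000 ))              # 5779 cyc per poller invocation
    echo '5779 * 1000000000 / 2400000000' | bc   # 2407 nsec (bc truncates at scale 0)

Run 2 below works out the same way: 2401466946 / 5502000 = 436 cyc, or 181 nsec.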
00:05:28.079 [2024-10-11T09:39:12.711Z] ====================================== 00:05:28.079 [2024-10-11T09:39:12.711Z] busy:2401466946 (cyc) 00:05:28.079 [2024-10-11T09:39:12.711Z] total_run_count: 5502000 00:05:28.079 [2024-10-11T09:39:12.711Z] tsc_hz: 2400000000 (cyc) 00:05:28.079 [2024-10-11T09:39:12.711Z] ====================================== 00:05:28.080 [2024-10-11T09:39:12.712Z] poller_cost: 436 (cyc), 181 (nsec) 00:05:28.080 00:05:28.080 real 0m1.159s 00:05:28.080 user 0m1.076s 00:05:28.080 sys 0m0.079s 00:05:28.080 11:39:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.080 11:39:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.080 ************************************ 00:05:28.080 END TEST thread_poller_perf 00:05:28.080 ************************************ 00:05:28.080 11:39:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:28.080 00:05:28.080 real 0m2.693s 00:05:28.080 user 0m2.342s 00:05:28.080 sys 0m0.365s 00:05:28.080 11:39:12 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.080 11:39:12 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.080 ************************************ 00:05:28.080 END TEST thread 00:05:28.080 ************************************ 00:05:28.341 11:39:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:28.341 11:39:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:28.341 11:39:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.341 11:39:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.341 11:39:12 -- common/autotest_common.sh@10 -- # set +x 00:05:28.341 ************************************ 00:05:28.341 START TEST app_cmdline 00:05:28.341 ************************************ 00:05:28.341 11:39:12 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:28.341 * Looking for test storage... 
00:05:28.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:28.341 11:39:12 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.341 11:39:12 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.341 11:39:12 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.341 11:39:12 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.341 11:39:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.341 11:39:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:28.342 11:39:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.603 11:39:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.603 11:39:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.603 11:39:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.603 --rc genhtml_branch_coverage=1 00:05:28.603 --rc genhtml_function_coverage=1 00:05:28.603 --rc genhtml_legend=1 00:05:28.603 --rc geninfo_all_blocks=1 00:05:28.603 --rc geninfo_unexecuted_blocks=1 00:05:28.603 00:05:28.603 ' 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.603 --rc genhtml_branch_coverage=1 00:05:28.603 --rc genhtml_function_coverage=1 00:05:28.603 --rc genhtml_legend=1 00:05:28.603 --rc geninfo_all_blocks=1 00:05:28.603 --rc geninfo_unexecuted_blocks=1 
00:05:28.603 00:05:28.603 ' 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.603 --rc genhtml_branch_coverage=1 00:05:28.603 --rc genhtml_function_coverage=1 00:05:28.603 --rc genhtml_legend=1 00:05:28.603 --rc geninfo_all_blocks=1 00:05:28.603 --rc geninfo_unexecuted_blocks=1 00:05:28.603 00:05:28.603 ' 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.603 --rc genhtml_branch_coverage=1 00:05:28.603 --rc genhtml_function_coverage=1 00:05:28.603 --rc genhtml_legend=1 00:05:28.603 --rc geninfo_all_blocks=1 00:05:28.603 --rc geninfo_unexecuted_blocks=1 00:05:28.603 00:05:28.603 ' 00:05:28.603 11:39:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:28.603 11:39:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=797587 00:05:28.603 11:39:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 797587 00:05:28.603 11:39:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 797587 ']' 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.603 11:39:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.603 [2024-10-11 11:39:13.035874] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:28.603 [2024-10-11 11:39:13.035942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797587 ] 00:05:28.603 [2024-10-11 11:39:13.118657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.603 [2024-10-11 11:39:13.160052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.547 11:39:13 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.547 11:39:13 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:29.547 11:39:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:29.547 { 00:05:29.547 "version": "SPDK v25.01-pre git sha1 5031f0f3b", 00:05:29.547 "fields": { 00:05:29.547 "major": 25, 00:05:29.547 "minor": 1, 00:05:29.547 "patch": 0, 00:05:29.547 "suffix": "-pre", 00:05:29.547 "commit": "5031f0f3b" 00:05:29.547 } 00:05:29.547 } 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:29.547 11:39:14 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:29.547 11:39:14 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.808 request: 00:05:29.808 { 00:05:29.808 "method": "env_dpdk_get_mem_stats", 00:05:29.808 "req_id": 1 00:05:29.808 } 00:05:29.808 Got JSON-RPC error response 00:05:29.808 response: 00:05:29.808 { 00:05:29.808 "code": -32601, 00:05:29.808 "message": "Method not found" 00:05:29.808 } 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.808 11:39:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 797587 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 797587 ']' 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 797587 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 797587 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 797587' 00:05:29.808 killing process with pid 797587 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@969 -- # kill 797587 00:05:29.808 11:39:14 app_cmdline -- common/autotest_common.sh@974 -- # wait 797587 00:05:30.069 00:05:30.069 real 0m1.689s 00:05:30.069 user 0m2.024s 00:05:30.069 sys 0m0.446s 00:05:30.069 11:39:14 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.069 11:39:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.069 ************************************ 00:05:30.069 END TEST app_cmdline 00:05:30.069 ************************************ 00:05:30.069 11:39:14 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:30.069 11:39:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.069 11:39:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.069 11:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.069 ************************************ 00:05:30.069 START TEST version 00:05:30.069 ************************************ 00:05:30.069 11:39:14 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:30.069 * Looking for test storage... 
00:05:30.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:30.069 11:39:14 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.069 11:39:14 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.069 11:39:14 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.330 11:39:14 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.330 11:39:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.330 11:39:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.330 11:39:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.330 11:39:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.330 11:39:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.330 11:39:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.330 11:39:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.330 11:39:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.330 11:39:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.330 11:39:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.330 11:39:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.330 11:39:14 version -- scripts/common.sh@344 -- # case "$op" in 00:05:30.330 11:39:14 version -- scripts/common.sh@345 -- # : 1 00:05:30.330 11:39:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.330 11:39:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.330 11:39:14 version -- scripts/common.sh@365 -- # decimal 1 00:05:30.330 11:39:14 version -- scripts/common.sh@353 -- # local d=1 00:05:30.330 11:39:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.330 11:39:14 version -- scripts/common.sh@355 -- # echo 1 00:05:30.330 11:39:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.330 11:39:14 version -- scripts/common.sh@366 -- # decimal 2 00:05:30.330 11:39:14 version -- scripts/common.sh@353 -- # local d=2 00:05:30.330 11:39:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.330 11:39:14 version -- scripts/common.sh@355 -- # echo 2 00:05:30.330 11:39:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.330 11:39:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.330 11:39:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.330 11:39:14 version -- scripts/common.sh@368 -- # return 0 00:05:30.330 11:39:14 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.330 11:39:14 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.330 --rc genhtml_branch_coverage=1 00:05:30.330 --rc genhtml_function_coverage=1 00:05:30.330 --rc genhtml_legend=1 00:05:30.330 --rc geninfo_all_blocks=1 00:05:30.330 --rc geninfo_unexecuted_blocks=1 00:05:30.330 00:05:30.330 ' 00:05:30.330 11:39:14 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.330 --rc genhtml_branch_coverage=1 00:05:30.330 --rc genhtml_function_coverage=1 00:05:30.330 --rc genhtml_legend=1 00:05:30.330 --rc geninfo_all_blocks=1 00:05:30.330 --rc geninfo_unexecuted_blocks=1 00:05:30.330 00:05:30.330 ' 00:05:30.330 11:39:14 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.330 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.330 --rc genhtml_branch_coverage=1 00:05:30.330 --rc genhtml_function_coverage=1 00:05:30.330 --rc genhtml_legend=1 00:05:30.330 --rc geninfo_all_blocks=1 00:05:30.330 --rc geninfo_unexecuted_blocks=1 00:05:30.330 00:05:30.330 ' 00:05:30.330 11:39:14 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.330 --rc genhtml_branch_coverage=1 00:05:30.330 --rc genhtml_function_coverage=1 00:05:30.330 --rc genhtml_legend=1 00:05:30.330 --rc geninfo_all_blocks=1 00:05:30.330 --rc geninfo_unexecuted_blocks=1 00:05:30.330 00:05:30.330 ' 00:05:30.330 11:39:14 version -- app/version.sh@17 -- # get_header_version major 00:05:30.330 11:39:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.330 11:39:14 version -- app/version.sh@14 -- # cut -f2 00:05:30.330 11:39:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.330 11:39:14 version -- app/version.sh@17 -- # major=25 00:05:30.330 11:39:14 version -- app/version.sh@18 -- # get_header_version minor 00:05:30.331 11:39:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.331 11:39:14 version -- app/version.sh@14 -- # cut -f2 00:05:30.331 11:39:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.331 11:39:14 version -- app/version.sh@18 -- # minor=1 00:05:30.331 11:39:14 version -- app/version.sh@19 -- # get_header_version patch 00:05:30.331 11:39:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.331 11:39:14 version -- app/version.sh@14 -- # cut -f2 00:05:30.331 11:39:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.331 11:39:14 version -- app/version.sh@19 -- # patch=0 00:05:30.331 11:39:14 version -- app/version.sh@20 -- # get_header_version suffix 00:05:30.331 11:39:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.331 11:39:14 version -- app/version.sh@14 -- # cut -f2 00:05:30.331 11:39:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.331 11:39:14 version -- app/version.sh@20 -- # suffix=-pre 00:05:30.331 11:39:14 version -- app/version.sh@22 -- # version=25.1 00:05:30.331 11:39:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:30.331 11:39:14 version -- app/version.sh@28 -- # version=25.1rc0 00:05:30.331 11:39:14 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:30.331 11:39:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:30.331 11:39:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:30.331 11:39:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:30.331 00:05:30.331 real 0m0.277s 00:05:30.331 user 0m0.167s 00:05:30.331 sys 0m0.160s 00:05:30.331 11:39:14 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.331 
11:39:14 version -- common/autotest_common.sh@10 -- # set +x 00:05:30.331 ************************************ 00:05:30.331 END TEST version 00:05:30.331 ************************************ 00:05:30.331 11:39:14 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:30.331 11:39:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:30.331 11:39:14 -- spdk/autotest.sh@194 -- # uname -s 00:05:30.331 11:39:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:30.331 11:39:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:30.331 11:39:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:30.331 11:39:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:30.331 11:39:14 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:30.331 11:39:14 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:30.331 11:39:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:30.331 11:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.331 11:39:14 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:30.331 11:39:14 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:30.331 11:39:14 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:30.331 11:39:14 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:30.331 11:39:14 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:30.331 11:39:14 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:30.331 11:39:14 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:30.331 11:39:14 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:30.331 11:39:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.331 11:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:30.331 ************************************ 00:05:30.331 START TEST nvmf_tcp 00:05:30.331 ************************************ 00:05:30.331 11:39:14 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:30.593 * Looking for test storage... 
00:05:30.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.593 11:39:15 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.593 --rc genhtml_branch_coverage=1 00:05:30.593 --rc genhtml_function_coverage=1 00:05:30.593 --rc genhtml_legend=1 00:05:30.593 --rc geninfo_all_blocks=1 00:05:30.593 --rc geninfo_unexecuted_blocks=1 00:05:30.593 00:05:30.593 ' 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.593 --rc genhtml_branch_coverage=1 00:05:30.593 --rc genhtml_function_coverage=1 00:05:30.593 --rc genhtml_legend=1 00:05:30.593 --rc geninfo_all_blocks=1 00:05:30.593 --rc geninfo_unexecuted_blocks=1 00:05:30.593 00:05:30.593 ' 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:30.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.593 --rc genhtml_branch_coverage=1 00:05:30.593 --rc genhtml_function_coverage=1 00:05:30.593 --rc genhtml_legend=1 00:05:30.593 --rc geninfo_all_blocks=1 00:05:30.593 --rc geninfo_unexecuted_blocks=1 00:05:30.593 00:05:30.593 ' 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.593 --rc genhtml_branch_coverage=1 00:05:30.593 --rc genhtml_function_coverage=1 00:05:30.593 --rc genhtml_legend=1 00:05:30.593 --rc geninfo_all_blocks=1 00:05:30.593 --rc geninfo_unexecuted_blocks=1 00:05:30.593 00:05:30.593 ' 00:05:30.593 11:39:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:30.593 11:39:15 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:30.593 11:39:15 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.593 11:39:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.593 ************************************ 00:05:30.593 START TEST nvmf_target_core 00:05:30.593 ************************************ 00:05:30.593 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:30.855 * Looking for test storage... 00:05:30.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:30.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.855 --rc genhtml_branch_coverage=1 00:05:30.855 --rc genhtml_function_coverage=1 00:05:30.855 --rc genhtml_legend=1 00:05:30.855 --rc geninfo_all_blocks=1 00:05:30.855 --rc geninfo_unexecuted_blocks=1 00:05:30.855 00:05:30.855 ' 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:30.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.855 --rc genhtml_branch_coverage=1 00:05:30.855 --rc genhtml_function_coverage=1 00:05:30.855 --rc genhtml_legend=1 00:05:30.855 --rc geninfo_all_blocks=1 00:05:30.855 --rc geninfo_unexecuted_blocks=1 00:05:30.855 00:05:30.855 ' 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:30.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.855 --rc genhtml_branch_coverage=1 00:05:30.855 --rc genhtml_function_coverage=1 00:05:30.855 --rc genhtml_legend=1 00:05:30.855 --rc geninfo_all_blocks=1 00:05:30.855 --rc geninfo_unexecuted_blocks=1 00:05:30.855 00:05:30.855 ' 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:30.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.855 --rc genhtml_branch_coverage=1 00:05:30.855 --rc genhtml_function_coverage=1 00:05:30.855 --rc genhtml_legend=1 00:05:30.855 --rc geninfo_all_blocks=1 00:05:30.855 --rc geninfo_unexecuted_blocks=1 00:05:30.855 00:05:30.855 ' 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.855 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:30.856 
************************************ 00:05:30.856 START TEST nvmf_abort 00:05:30.856 ************************************ 00:05:30.856 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:31.118 * Looking for test storage... 00:05:31.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.118 --rc genhtml_branch_coverage=1 00:05:31.118 --rc genhtml_function_coverage=1 00:05:31.118 --rc genhtml_legend=1 00:05:31.118 --rc geninfo_all_blocks=1 00:05:31.118 --rc geninfo_unexecuted_blocks=1 00:05:31.118 00:05:31.118 ' 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.118 --rc genhtml_branch_coverage=1 00:05:31.118 --rc genhtml_function_coverage=1 00:05:31.118 --rc genhtml_legend=1 00:05:31.118 --rc geninfo_all_blocks=1 00:05:31.118 --rc geninfo_unexecuted_blocks=1 00:05:31.118 00:05:31.118 ' 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.118 --rc genhtml_branch_coverage=1 00:05:31.118 --rc genhtml_function_coverage=1 00:05:31.118 --rc genhtml_legend=1 00:05:31.118 --rc geninfo_all_blocks=1 00:05:31.118 --rc geninfo_unexecuted_blocks=1 00:05:31.118 00:05:31.118 ' 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.118 --rc genhtml_branch_coverage=1 00:05:31.118 --rc genhtml_function_coverage=1 00:05:31.118 --rc genhtml_legend=1 00:05:31.118 --rc geninfo_all_blocks=1 00:05:31.118 --rc geninfo_unexecuted_blocks=1 00:05:31.118 00:05:31.118 ' 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:31.118 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
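The nvmftestinit trace that follows condenses to the netns recipe below. A minimal sketch, using the names printed in this run (physical ports cvl_0_0/cvl_0_1 -- presumably cabled back-to-back on this phy rig -- target namespace cvl_0_0_ns_spdk, NVMe/TCP port 4420); the harness performs the same steps with extra device discovery and checks:

# start from clean addresses on both ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
# target side: move one port into its own namespace and give it the target IP
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# initiator side: the second port stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1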
00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:31.119 11:39:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:39.273 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:39.274 11:39:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:39.274 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:39.274 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:39.274 11:39:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:39.274 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:39.274 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:39.274 11:39:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:39.274 11:39:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:39.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:39.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:05:39.274 00:05:39.274 --- 10.0.0.2 ping statistics --- 00:05:39.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.274 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:39.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:39.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:05:39.274 00:05:39.274 --- 10.0.0.1 ping statistics --- 00:05:39.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.274 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=802072 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 802072 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 802072 ']' 00:05:39.274 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.275 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.275 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.275 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.275 11:39:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.275 [2024-10-11 11:39:23.209730] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:05:39.275 [2024-10-11 11:39:23.209803] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:39.275 [2024-10-11 11:39:23.301070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.275 [2024-10-11 11:39:23.354123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:39.275 [2024-10-11 11:39:23.354171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:39.275 [2024-10-11 11:39:23.354180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.275 [2024-10-11 11:39:23.354187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.275 [2024-10-11 11:39:23.354193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:39.275 [2024-10-11 11:39:23.356001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.275 [2024-10-11 11:39:23.356163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.275 [2024-10-11 11:39:23.356164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.536 [2024-10-11 11:39:24.086812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.536 Malloc0 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.536 Delay0 
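The target bring-up traced around this point (rpc_cmd is effectively the harness's wrapper for scripts/rpc.py) condenses to the sketch below; sizes, delay latencies, NQN and listener address are the ones from this run. The delay bdev is the key ingredient: with average and p99 read/write latency all set to 1000000 us, every I/O sits in flight for about a second, so the abort example reliably finds commands to cancel:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0     # 64 MiB backing bdev, 4096-byte blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000          # avg/p99 read+write latency, microseconds
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# drive it with the abort example; -q 128 is at the edge of the controller's
# 128-entry IO queue, so, as the output below notes, requests may queue in the NVMe driver
build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128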
00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.536 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.797 [2024-10-11 11:39:24.173201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:39.797 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.797 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:39.797 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.797 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.797 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.797 11:39:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:39.797 [2024-10-11 11:39:24.303878] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:42.345 Initializing NVMe Controllers 00:05:42.345 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:42.345 controller IO queue size 128 less than required 00:05:42.345 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:42.345 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:42.345 Initialization complete. Launching workers. 
00:05:42.345 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28538 00:05:42.345 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28599, failed to submit 62 00:05:42.345 success 28542, unsuccessful 57, failed 0 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:42.345 rmmod nvme_tcp 00:05:42.345 rmmod nvme_fabrics 00:05:42.345 rmmod nvme_keyring 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 802072 ']' 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 802072 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 802072 ']' 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 802072 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 802072 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 802072' 00:05:42.345 killing process with pid 802072 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 802072 00:05:42.345 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 802072 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:42.346 11:39:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.260 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:44.260 00:05:44.260 real 0m13.368s 00:05:44.260 user 0m14.146s 00:05:44.260 sys 0m6.617s 00:05:44.260 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.260 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.260 ************************************ 00:05:44.260 END TEST nvmf_abort 00:05:44.260 ************************************ 00:05:44.260 11:39:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:44.260 11:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:44.260 11:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.260 11:39:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:44.521 ************************************ 00:05:44.521 START TEST nvmf_ns_hotplug_stress 00:05:44.521 ************************************ 00:05:44.521 11:39:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:44.521 * Looking for test storage... 
00:05:44.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.521 --rc genhtml_branch_coverage=1 00:05:44.521 --rc genhtml_function_coverage=1 00:05:44.521 --rc genhtml_legend=1 00:05:44.521 --rc geninfo_all_blocks=1 00:05:44.521 --rc geninfo_unexecuted_blocks=1 00:05:44.521 00:05:44.521 ' 00:05:44.521 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.522 --rc genhtml_branch_coverage=1 00:05:44.522 --rc genhtml_function_coverage=1 00:05:44.522 --rc genhtml_legend=1 00:05:44.522 --rc geninfo_all_blocks=1 00:05:44.522 --rc geninfo_unexecuted_blocks=1 00:05:44.522 00:05:44.522 ' 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.522 --rc genhtml_branch_coverage=1 00:05:44.522 --rc genhtml_function_coverage=1 00:05:44.522 --rc genhtml_legend=1 00:05:44.522 --rc geninfo_all_blocks=1 00:05:44.522 --rc geninfo_unexecuted_blocks=1 00:05:44.522 00:05:44.522 ' 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.522 --rc genhtml_branch_coverage=1 00:05:44.522 --rc genhtml_function_coverage=1 00:05:44.522 --rc genhtml_legend=1 00:05:44.522 --rc geninfo_all_blocks=1 00:05:44.522 --rc geninfo_unexecuted_blocks=1 00:05:44.522 00:05:44.522 ' 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.522 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:44.784 11:39:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.927 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:52.928 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.928 
11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:52.928 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:52.928 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:52.928 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:52.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:52.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:05:52.928 00:05:52.928 --- 10.0.0.2 ping statistics --- 00:05:52.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.928 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:05:52.928 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:52.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:52.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:05:52.928 00:05:52.928 --- 10.0.0.1 ping statistics --- 00:05:52.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:52.928 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=806931 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 806931 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
806931 ']' 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.929 11:39:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.929 [2024-10-11 11:39:36.614333] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:05:52.929 [2024-10-11 11:39:36.614399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:52.929 [2024-10-11 11:39:36.703845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.929 [2024-10-11 11:39:36.761353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:52.929 [2024-10-11 11:39:36.761403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:52.929 [2024-10-11 11:39:36.761411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:52.929 [2024-10-11 11:39:36.761418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:52.929 [2024-10-11 11:39:36.761425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
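From here to the end of the run the trace repeats one pattern: ns_hotplug_stress.sh hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev as a namespace, grows the NULL1 bdev by one block (null_size 1001, 1002, ...), and uses kill -0 to confirm the spdk_nvme_perf workload (PERF_PID=807503, started with -t 30) is still alive. A minimal sketch of that loop, reconstructed from the sh@44-sh@50 trace entries rather than quoted from the script itself; $rpc and $nqn below abbreviate the absolute rpc.py path and subsystem NQN shown in the trace:

    rpc=scripts/rpc.py                  # trace shows the absolute path under the jenkins workspace
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$PERF_PID"; do                     # loop until spdk_nvme_perf (-t 30) exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1      # hot-remove namespace 1 under I/O
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0    # hot-add the delay bdev back
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"    # resize NULL1 while it stays attached
    done

Each iteration of this loop accounts for one remove_ns/add_ns/null_size/bdev_null_resize/true block in the trace that follows.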
00:05:52.929 [2024-10-11 11:39:36.763271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.929 [2024-10-11 11:39:36.763433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.929 [2024-10-11 11:39:36.763433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.929 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.929 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:52.929 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:52.929 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.929 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.929 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:52.929 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:52.929 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:53.190 [2024-10-11 11:39:37.659417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.190 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:53.451 11:39:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:53.451 [2024-10-11 11:39:38.050599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.714 11:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:53.714 11:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:53.975 Malloc0 00:05:53.975 11:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.236 Delay0 00:05:54.236 11:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.496 11:39:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:54.496 NULL1 00:05:54.496 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:54.757 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:54.757 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=807503 00:05:54.757 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:54.757 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.018 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.018 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:55.018 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:55.278 true 00:05:55.278 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:55.278 11:39:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.539 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.800 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:55.800 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:55.800 true 00:05:55.800 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:55.800 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.060 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.321 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:56.321 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:56.321 true 00:05:56.321 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:56.321 11:39:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.582 11:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.842 11:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:56.842 11:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:56.842 true 00:05:56.842 11:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:56.842 11:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.103 11:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.364 11:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:57.364 11:39:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:57.364 true 00:05:57.624 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:57.624 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.624 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.884 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:57.884 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:58.144 true 00:05:58.144 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:58.144 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.144 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.405 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:58.405 11:39:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:58.665 true 00:05:58.665 11:39:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:58.665 11:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.665 11:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.925 11:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:58.925 11:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:59.185 true 00:05:59.185 11:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:59.185 11:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.446 11:39:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.446 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:59.446 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:59.706 true 00:05:59.706 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:05:59.706 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.966 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.966 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:59.966 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:00.228 true 00:06:00.228 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:00.228 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.489 11:39:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.489 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:00.489 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:00.750 true 00:06:00.750 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:00.750 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.011 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.272 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:01.272 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:01.272 true 00:06:01.272 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:01.272 11:39:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.533 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.793 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:01.793 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:01.793 true 00:06:01.793 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:01.793 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.054 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.315 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:02.315 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:02.315 true 00:06:02.315 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:02.315 11:39:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.576 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.836 11:39:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:02.836 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:02.836 true 00:06:02.836 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:02.836 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.097 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.358 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:03.358 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:03.358 true 00:06:03.619 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:03.619 11:39:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.619 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.880 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:03.880 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:04.141 true 00:06:04.141 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:04.141 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.141 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.402 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:04.402 11:39:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:04.663 true 00:06:04.663 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:04.663 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.924 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.924 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:04.924 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:05.184 true 00:06:05.184 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:05.184 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.445 11:39:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.445 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:05.445 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:05.708 true 00:06:05.708 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:05.708 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.971 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.971 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:05.971 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:06.232 true 00:06:06.232 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:06.232 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.493 11:39:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.754 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:06.754 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:06.754 true 00:06:06.754 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:06.754 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.015 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.275 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:07.275 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:07.275 true 00:06:07.275 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:07.275 11:39:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.536 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.797 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:07.797 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:07.797 true 00:06:07.797 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:07.797 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.057 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.318 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:08.318 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:08.318 true 00:06:08.318 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:08.318 11:39:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.578 11:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.838 11:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:08.838 11:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:08.838 true 00:06:08.839 11:39:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:08.839 11:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.099 11:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.359 11:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:09.359 11:39:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:09.620 true 00:06:09.620 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:09.620 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.620 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.880 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:09.880 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:10.140 true 00:06:10.140 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:10.140 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.140 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.401 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:10.401 11:39:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:10.662 true 00:06:10.662 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:10.662 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.662 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.923 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:10.923 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:11.183 true 00:06:11.184 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:11.184 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.450 11:39:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.451 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:11.451 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:11.711 true 00:06:11.711 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:11.711 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.972 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.972 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:11.972 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:12.234 true 00:06:12.234 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:12.234 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.495 11:39:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.495 11:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:12.495 11:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:12.756 true 00:06:12.756 11:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:12.756 11:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.017 11:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.277 11:39:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:13.277 11:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:13.277 true 00:06:13.277 11:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:13.277 11:39:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.538 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.798 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:13.799 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:13.799 true 00:06:13.799 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:13.799 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.060 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.321 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:14.321 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:14.321 true 00:06:14.581 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:14.581 11:39:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.581 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.842 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:14.842 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:14.842 true 00:06:15.103 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:15.103 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.103 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.363 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:15.363 11:39:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:15.624 true 00:06:15.624 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:15.624 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.624 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.885 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:15.885 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:16.146 true 00:06:16.146 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:16.146 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.407 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.407 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:16.407 11:40:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:16.668 true 00:06:16.668 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:16.668 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.929 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.929 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:16.929 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:17.190 true 00:06:17.190 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:17.190 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.451 11:40:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.451 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:17.451 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:17.712 true 00:06:17.712 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:17.712 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.972 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.233 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:18.233 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:18.233 true 00:06:18.233 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:18.233 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.493 11:40:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.754 11:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:18.754 11:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:18.754 true 00:06:18.754 11:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:18.754 11:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.015 11:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.276 11:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:19.276 11:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:19.276 true 00:06:19.276 11:40:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:19.276 11:40:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.537 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.797 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:19.797 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:19.797 true 00:06:19.797 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:19.797 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.058 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.319 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:20.319 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:20.319 true 00:06:20.579 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:20.579 11:40:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.579 11:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.839 11:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:20.839 11:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:21.100 true 00:06:21.100 11:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:21.100 11:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.100 11:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.360 11:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:21.360 11:40:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:21.620 true 00:06:21.621 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:21.621 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.621 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.880 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:21.880 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:22.140 true 00:06:22.140 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:22.140 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.400 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.400 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:22.400 11:40:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:22.661 true 00:06:22.661 11:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:22.661 11:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.921 11:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.921 11:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:22.921 11:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:23.182 true 00:06:23.182 11:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:23.182 11:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.441 11:40:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.702 11:40:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:23.702 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:23.702 true 00:06:23.702 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:23.702 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.964 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.224 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:24.224 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:24.224 true 00:06:24.224 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:24.224 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.484 11:40:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.745 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:24.745 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:24.745 true 00:06:24.745 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:24.745 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.006 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.006 Initializing NVMe Controllers 00:06:25.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:25.006 Controller IO queue size 128, less than required. 00:06:25.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:25.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:25.006 Initialization complete. Launching workers. 
00:06:25.006 ========================================================
00:06:25.006                                                                           Latency(us)
00:06:25.006 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:25.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30917.12      15.10    4140.11    1143.09   44564.72
00:06:25.006 ========================================================
00:06:25.006 Total                                                                    :   30917.12      15.10    4140.11    1143.09   44564.72
00:06:25.006
00:06:25.266 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:06:25.266 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:06:25.266 true 00:06:25.527 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 807503 00:06:25.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (807503) - No such process 00:06:25.527 11:40:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 807503 00:06:25.527 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.527 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:25.787 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:25.787 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:25.787 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:25.787 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:25.787 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:25.787 null0 00:06:26.048 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.048 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.048 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:26.048 null1 00:06:26.048 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.048 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.048 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:26.309 null2 00:06:26.309 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.309 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.309
11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:26.570 null3 00:06:26.570 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.570 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.570 11:40:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:26.570 null4 00:06:26.570 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.570 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.570 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:26.832 null5 00:06:26.832 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:26.832 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:26.832 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:27.093 null6 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:27.093 null7 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
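
The performance summary above is internally consistent: the initiator reported an IO queue size of 128, and by Little's law 30917.12 IOPS x 4140.11 us of average latency works out to roughly 128 IOs in flight, so the ~4.14 ms average reflects a permanently full queue rather than a stall.

The xtrace that precedes the summary (ns_hotplug_stress.sh@44-@50, null_size stepping from 1027 to 1056) corresponds to a hotplug/resize loop of roughly the following shape. This is a sketch reconstructed from the expanded commands in the log; the rpc_py and PERF_PID names are assumptions, since the trace only shows their expanded values:

    # Resize loop as reconstructed from the xtrace (ns_hotplug_stress.sh@44-@50).
    # rpc_py and PERF_PID are assumed names; the log only shows the expanded commands.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py      # path seen in the trace
    while kill -0 "$PERF_PID"; do                                                # @44: keep going while the perf process lives
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # @45: detach namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0        # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                             # @49: 1027, 1028, ... 1056
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                            # @50: resize NULL1 under IO
    done

The loop exits exactly where the log shows kill reporting "(807503) - No such process": the perf initiator has finished, so the script waits on it, removes the remaining namespaces, and moves on to the multi-threaded phase below.
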
00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:27.093 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 814052 814054 814055 814057 814059 814061 814063 814064 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
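
By this point the trace has shown the full shape of the multi-threaded phase (@58-@66, with the add_remove body at @14-@18): eight null bdevs are created (bdev_null_create null0 through null7, with arguments 100 and 4096 as seen above), then eight add_remove workers are started in the background (their PIDs, 814052 through 814064, appear in the wait at @66), and each worker repeatedly attaches and detaches its own namespace ID. The following is a sketch reconstructed from the xtrace; rpc_py is again an assumed shorthand, and the backgrounding ampersand is inferred from pids+=($!) and the wait:

    # Multi-threaded add/remove phase as reconstructed from the xtrace.
    add_remove() {
        local nsid=$1 bdev=$2                                                    # @14: e.g. "1 null0" ... "8 null7"
        for ((i = 0; i < 10; i++)); do                                           # @16: ten add/remove rounds per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

    nthreads=8
    pids=()                                                                      # @58
    for ((i = 0; i < nthreads; i++)); do                                         # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096                             # @60: name, size, block size
    done
    for ((i = 0; i < nthreads; i++)); do                                         # @62
        add_remove $((i + 1)) "null$i" &                                         # @63: one worker per namespace ID
        pids+=($!)                                                               # @64
    done
    wait "${pids[@]}"                                                            # @66: the "wait 814052 814054 ..." above

Because the eight backgrounded workers share one xtrace stream, their (( ++i )), nvmf_subsystem_add_ns, and nvmf_subsystem_remove_ns lines interleave from here on, which is why the namespace IDs in the remainder of the log appear out of order.
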
00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.355 11:40:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.617 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:27.877 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.136 11:40:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.136 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.396 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.396 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.397 11:40:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.397 11:40:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.657 11:40:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.657 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.918 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.918 11:40:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.919 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.180 11:40:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.180 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.441 11:40:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.441 11:40:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.441 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.441 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.441 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.441 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.441 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.441 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.441 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.701 11:40:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.701 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.702 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.702 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.702 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.702 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.702 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.962 11:40:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:06:29.962 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.223 11:40:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.223 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.482 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.483 11:40:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.483 11:40:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.483 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.743 11:40:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.743 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.744 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.744 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.004 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.265 11:40:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:31.265 rmmod nvme_tcp 00:06:31.265 rmmod nvme_fabrics 00:06:31.265 rmmod nvme_keyring 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 806931 ']' 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 806931 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 806931 ']' 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 806931 00:06:31.265 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:31.266 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.266 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 806931 00:06:31.266 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:31.266 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:31.266 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 806931' 00:06:31.266 killing process with pid 806931 00:06:31.266 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 806931 00:06:31.266 11:40:15 
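The stress body traced above is a tight add/remove loop: lines tagged ns_hotplug_stress.sh@16 are the counter, @17 attaches a namespace to nqn.2016-06.io.spdk:cnode1 backed by one of the null0..null7 bdevs (nsid n maps to null(n-1)), and @18 detaches one. The interleaved duplicate (( ++i )) entries suggest several such loops racing in parallel. A minimal sketch of one loop, reconstructed from the trace (the random nsid choice is an assumption, and error handling is omitted):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while (( i < 10 )); do                                  # ns_hotplug_stress.sh@16
      nsid=$(( RANDOM % 8 + 1 ))                          # assumed: pick nsid 1..8
      "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$(( nsid - 1 ))"  # @17
      "$rpc" nvmf_subsystem_remove_ns "$nqn" "$(( RANDOM % 8 + 1 ))"        # @18
      (( ++i ))
  done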
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 806931 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.526 11:40:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.440 11:40:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:33.440 00:06:33.440 real 0m49.081s 00:06:33.440 user 3m21.410s 00:06:33.440 sys 0m17.353s 00:06:33.440 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.440 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.440 ************************************ 00:06:33.440 END TEST nvmf_ns_hotplug_stress 00:06:33.440 ************************************ 00:06:33.440 11:40:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:33.440 11:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:33.440 11:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.440 11:40:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.705 ************************************ 00:06:33.705 START TEST nvmf_delete_subsystem 00:06:33.705 ************************************ 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:33.705 * Looking for test storage... 
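The killprocess/wait pair above is the harness's standard target shutdown: verify a pid was passed, confirm the process is alive, refuse to kill a bare sudo wrapper by name, then kill and reap. A sketch after the autotest_common.sh@950-974 trace, not the verbatim function:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                       # @950: require a pid
      kill -0 "$pid" || return 0                      # @954: already gone
      if [ "$(uname)" = Linux ]; then                 # @955
          local name
          name=$(ps --no-headers -o comm= "$pid")     # @956: here, reactor_1
          [ "$name" = sudo ] && return 1              # @960: never kill a sudo wrapper
      fi
      echo "killing process with pid $pid"            # @968
      kill "$pid"                                     # @969
      wait "$pid"                                     # @974: reap (pid is a child here)
  }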
00:06:33.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.705 --rc genhtml_branch_coverage=1 00:06:33.705 --rc genhtml_function_coverage=1 00:06:33.705 --rc genhtml_legend=1 00:06:33.705 --rc geninfo_all_blocks=1 00:06:33.705 --rc geninfo_unexecuted_blocks=1 00:06:33.705 00:06:33.705 ' 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.705 --rc genhtml_branch_coverage=1 00:06:33.705 --rc genhtml_function_coverage=1 00:06:33.705 --rc genhtml_legend=1 00:06:33.705 --rc geninfo_all_blocks=1 00:06:33.705 --rc geninfo_unexecuted_blocks=1 00:06:33.705 00:06:33.705 ' 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.705 --rc genhtml_branch_coverage=1 00:06:33.705 --rc genhtml_function_coverage=1 00:06:33.705 --rc genhtml_legend=1 00:06:33.705 --rc geninfo_all_blocks=1 00:06:33.705 --rc geninfo_unexecuted_blocks=1 00:06:33.705 00:06:33.705 ' 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:33.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.705 --rc genhtml_branch_coverage=1 00:06:33.705 --rc genhtml_function_coverage=1 00:06:33.705 --rc genhtml_legend=1 00:06:33.705 --rc geninfo_all_blocks=1 00:06:33.705 --rc geninfo_unexecuted_blocks=1 00:06:33.705 00:06:33.705 ' 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
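The scripts/common.sh trace just above is a numeric, field-by-field version comparison: lt 1.15 2 resolves to cmp_versions 1.15 '<' 2, each version string is split on the characters . - :, and the first differing field decides the result. Here it establishes lcov 1.15 < 2 before exporting the --rc lcov_*/genhtml_* coverage options. A condensed sketch of the logic visible in the trace (zero-padding of short versions and the equal-versions fallthrough are assumptions):

  lt() { cmp_versions "$1" '<' "$2"; }                 # scripts/common.sh@373
  cmp_versions() {
      local -a ver1 ver2
      local v op=$2
      IFS=.-: read -ra ver1 <<< "$1"                   # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"                   # "2"    -> (2)
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
          if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == *'='* ]]                               # equal: only <=, >=, == pass
  }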
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.705 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
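common.sh@17-19 above pin the initiator identity for the whole run: nvme gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, and NVME_HOSTID is just that trailing UUID. The effect as a sketch (the parameter expansion is an assumed implementation; the trace only shows the resulting values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the part after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")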
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
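The PATH above carries the same /opt/golangci, /opt/protoc and /opt/go segments many times over because paths/export.sh prepends them on every source. That is harmless (lookup stops at the first hit) but it is what makes these log lines so enormous. If one wanted to collapse it, a first-occurrence-wins pass would do (hypothetical helper, not part of the harness):

  dedup_path() {                          # hypothetical, not in paths/export.sh
      local -A seen
      local out='' dir rest=$PATH:
      while [[ -n $rest ]]; do
          dir=${rest%%:*}
          rest=${rest#*:}
          [[ ${seen[$dir]+x} ]] && continue       # keep the first occurrence only
          seen[$dir]=1
          out+=${out:+:}$dir
      done
      PATH=$out
  }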
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:33.706 11:40:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
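The "[: : integer expression expected" complaint above is a real, if benign, wart: nvmf/common.sh line 33 feeds an unset/empty variable to an integer test ('[' '' -eq 1 ']'), which test(1) rejects, and execution simply falls through to the next branch. Two defensive spellings of the same check (flag is an illustrative name, not the variable common.sh actually tests):

  [ "${flag:-0}" -eq 1 ]            # default the empty value to 0 first
  [[ -n $flag && $flag -eq 1 ]]     # or short-circuit when it is unset/empty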
local -ga x722 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:41.852 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.852 
11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:41.852 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:41.852 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:41.852 Found net devices under 0000:4b:00.1: cvl_0_1 
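A note on the bash error captured at the top of this trace: nvmf/common.sh line 33 evaluated [ '' -eq 1 ], and the test builtin's -eq requires integer operands, so an empty expansion yields "[: : integer expression expected" and a nonzero status. The run survives because a failed test simply takes the false branch. A minimal reproduction and one common guard, using a hypothetical variable name:

  flag=""
  [ "$flag" -eq 1 ]         # bash: [: : integer expression expected (status 2)
  [ "${flag:-0}" -eq 1 ]    # defaulting the expansion keeps the operand numeric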
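The discovery pass above groups supported NICs into arrays keyed by PCI vendor:device ID (intel=0x8086, mellanox=0x15b3; 0x159b is the E810 ID matched on both ports of this host) and then resolves each matched function to its kernel interface through sysfs, which is what produces the "Found net devices under 0000:4b:00.x" lines. A standalone sketch of both steps; the lspci lookup is an illustration only, since the script actually reads a pre-populated pci_bus_cache:

  intel=0x8086
  pci_devs=()
  # collect E810 (device ID 0x159b) functions, e.g. 0000:4b:00.0
  while read -r addr _; do
    pci_devs+=("$addr")
  done < <(lspci -D -d "${intel#0x}:159b")

  # map each function to the netdev(s) the kernel bound to it
  for pci in "${pci_devs[@]}"; do
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$net" ] || continue   # the glob matches nothing when no driver is bound
      echo "Found net devices under $pci: ${net##*/}"
    done
  done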
00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.852 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:41.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:06:41.853 00:06:41.853 --- 10.0.0.2 ping statistics --- 00:06:41.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.853 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:06:41.853 00:06:41.853 --- 10.0.0.1 ping statistics --- 00:06:41.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.853 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=819296 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 819296 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 819296 ']' 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.853 11:40:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.853 11:40:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.853 [2024-10-11 11:40:25.824023] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:06:41.853 [2024-10-11 11:40:25.824085] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.853 [2024-10-11 11:40:25.912493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.853 [2024-10-11 11:40:25.964926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.853 [2024-10-11 11:40:25.964981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.853 [2024-10-11 11:40:25.964995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.853 [2024-10-11 11:40:25.965002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.853 [2024-10-11 11:40:25.965009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:41.853 [2024-10-11 11:40:25.966602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.853 [2024-10-11 11:40:25.966603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.114 [2024-10-11 11:40:26.701619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:42.114 11:40:26 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.114 [2024-10-11 11:40:26.725970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.114 NULL1 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.114 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.375 Delay0 00:06:42.375 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.375 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.375 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.375 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.375 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.375 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=819588 00:06:42.375 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:42.375 11:40:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:42.375 [2024-10-11 11:40:26.842893] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
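Summarizing the nvmf_tcp_init sequence traced above: the two E810 ports are split into a two-host topology on one machine. cvl_0_0 moves into a fresh network namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule opens port 4420 and is tagged with an SPDK_NVMF comment so teardown can strip exactly this rule later, and a ping in each direction proves the path before the target starts. The commands, collected from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # target runs inside the namespace; waitforlisten blocks on /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

-e 0xFFFF enables every tracepoint group, which is why the app suggests 'spdk_trace -s nvmf -i 0', or copying /dev/shm/nvmf_trace.0, for offline analysis.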
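With the target listening on /var/tmp/spdk.sock, delete_subsystem.sh builds its victim over RPC: a TCP transport, subsystem cnode1 (allow-any-host, serial SPDK00000000000001, up to 10 namespaces) with a listener on 10.0.0.2:4420, and a 1000 MiB, 512-byte-block null bdev wrapped in a delay bdev whose -r/-t/-w/-n values impose roughly one second of artificial latency on every I/O class, guaranteeing plenty of commands are still in flight when the subsystem is yanked. The equivalent sequence via scripts/rpc.py (an assumption for illustration; the test drives the same RPCs through its rpc_cmd helper):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf then hammers the namespace from cores 2 and 3 (-c 0xC) with 512-byte random 70/30 read/write I/O at queue depth 128 for 5 seconds while the deletion lands.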
00:06:44.291 11:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:44.291 11:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.291 11:40:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.558 Read completed with error (sct=0, sc=8) 00:06:44.558 Read completed with error (sct=0, sc=8) 00:06:44.558 Write completed with error (sct=0, sc=8) 00:06:44.558 starting I/O failed: -6 00:06:44.558 Read completed with error (sct=0, sc=8) 00:06:44.558 Read completed with error (sct=0, sc=8) 00:06:44.558 Write completed with error (sct=0, sc=8) 00:06:44.558 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 [2024-10-11 11:40:28.927075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2468520 is same with the state(6) to be set 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 
Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 [2024-10-11 11:40:28.928779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466bc0 is same with the state(6) to be set 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error 
(sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 starting I/O failed: -6 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 [2024-10-11 11:40:28.932594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5cc000d450 is same with the state(6) to be set 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Write completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.559 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Write completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Write completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Write 
completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Write completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Write completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Write completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:44.560 Read completed with error (sct=0, sc=8) 00:06:45.502 [2024-10-11 11:40:29.900759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24690a0 is same with the state(6) to be set 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 [2024-10-11 11:40:29.930342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2466da0 is same with the state(6) to be set 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Write completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.502 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 [2024-10-11 11:40:29.930714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24669e0 is same with 
the state(6) to be set 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 [2024-10-11 11:40:29.934453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5cc000cfe0 is same with the state(6) to be set 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 
00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Write completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 Read completed with error (sct=0, sc=8) 00:06:45.503 [2024-10-11 11:40:29.934592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5cc000d780 is same with the state(6) to be set 00:06:45.503 Initializing NVMe Controllers 00:06:45.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:45.503 Controller IO queue size 128, less than required. 00:06:45.503 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:45.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:45.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:45.503 Initialization complete. Launching workers. 00:06:45.503 ======================================================== 00:06:45.503 Latency(us) 00:06:45.503 Device Information : IOPS MiB/s Average min max 00:06:45.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.37 0.08 906354.41 484.27 1006614.26 00:06:45.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.40 0.08 1039863.07 284.39 2002575.20 00:06:45.503 ======================================================== 00:06:45.503 Total : 320.77 0.16 971450.25 284.39 2002575.20 00:06:45.503 00:06:45.503 [2024-10-11 11:40:29.935147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24690a0 (9): Bad file descriptor 00:06:45.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:45.503 11:40:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.503 11:40:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:45.503 11:40:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 819588 00:06:45.503 11:40:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 819588 00:06:46.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (819588) - No such process 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 819588 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 819588 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:46.074 11:40:30 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 819588 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.074 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.075 [2024-10-11 11:40:30.465031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=820270 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820270 00:06:46.075 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.075 [2024-10-11 11:40:30.554336] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:06:46.647 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.647 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820270 00:06:46.647 11:40:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:46.907 11:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:46.907 11:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820270 00:06:46.907 11:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:47.477 11:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:47.477 11:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820270 00:06:47.477 11:40:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.047 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:48.047 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820270 00:06:48.047 11:40:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.617 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:48.617 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820270 00:06:48.617 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:49.186 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:49.187 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820270 00:06:49.187 11:40:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:49.187 Initializing NVMe Controllers 00:06:49.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:49.187 Controller IO queue size 128, less than required. 00:06:49.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:49.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:49.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:49.187 Initialization complete. Launching workers. 
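The completion flood on both runs is the point of the test: nvmf_delete_subsystem rips the submission queues out from under perf, so queued commands drain back with sct=0, sc=8, which in the NVMe generic status set decodes to Command Aborted due to SQ Deletion; new submissions fail with -6, i.e. -ENXIO; and the nvme_tcp "recv state of tqpair" errors are the transport noticing the same teardown. The script then waits for perf to die and, in the first phase, asserts through the NOT helper that wait reaps a nonzero status. The pattern, reconstructed from the xtrace, with perf_pid standing in for 819588/820270 and NOT reduced to its essence:

  NOT() { ! "$@"; }                      # simplified: succeed only if the command fails

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 30 )) && exit 1         # phase one allows ~15 s; the rerun bounds at > 20
    sleep 0.5
  done
  NOT wait "$perf_pid"                   # perf is expected to exit nonzero here

One way to tally the abort storm from a saved log (perf.log is a hypothetical file name):

  grep -o 'completed with error (sct=0, sc=8)' perf.log | wc -l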
00:06:49.187 ======================================================== 00:06:49.187 Latency(us) 00:06:49.187 Device Information : IOPS MiB/s Average min max 00:06:49.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001993.05 1000131.51 1005348.80 00:06:49.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003075.96 1000229.82 1007946.71 00:06:49.187 ======================================================== 00:06:49.187 Total : 256.00 0.12 1002534.50 1000131.51 1007946.71 00:06:49.187 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 820270 00:06:49.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (820270) - No such process 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 820270 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:49.447 rmmod nvme_tcp 00:06:49.447 rmmod nvme_fabrics 00:06:49.447 rmmod nvme_keyring 00:06:49.447 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:49.448 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:49.448 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:49.448 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 819296 ']' 00:06:49.448 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 819296 00:06:49.448 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 819296 ']' 00:06:49.448 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 819296 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 819296 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 819296' 00:06:49.708 killing process with pid 819296 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 819296 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 819296 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.708 11:40:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:52.253 00:06:52.253 real 0m18.250s 00:06:52.253 user 0m30.750s 00:06:52.253 sys 0m6.748s 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.253 ************************************ 00:06:52.253 END TEST nvmf_delete_subsystem 00:06:52.253 ************************************ 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.253 ************************************ 00:06:52.253 START TEST nvmf_host_management 00:06:52.253 ************************************ 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:52.253 * Looking for test storage... 
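Before the next test spins up, two closing observations on the delete_subsystem run above. First, the second latency table is the delay bdev behaving as configured: with 128 commands in flight and every I/O held for just over the programmed 1,000,000 us, queue depth divided by latency gives the observed ~128 IOPS per core, with min/avg/max all pinned near one second. Second, nvmftestfini unwinds everything nvmftestinit built: the target process is killed, the nvme modules are unloaded (the rmmod lines), the SPDK_NVMF-tagged iptables rule is dropped by replaying a filtered save file, and the namespace plumbing is flushed. The restore trick from the trace, plus the namespace removal that _remove_spdk_ns presumably performs (its body is hidden by xtrace_disable_per_cmd, so that line is an assumption):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only rules the test tagged
  ip netns delete cvl_0_0_ns_spdk                        # assumption: what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1

The comment tag added at setup is what makes the grep -v surgical: no other firewall state on the CI host is touched.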
00:06:52.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.253 --rc genhtml_branch_coverage=1 00:06:52.253 --rc genhtml_function_coverage=1 00:06:52.253 --rc genhtml_legend=1 00:06:52.253 --rc geninfo_all_blocks=1 00:06:52.253 --rc geninfo_unexecuted_blocks=1 00:06:52.253 00:06:52.253 ' 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.253 --rc genhtml_branch_coverage=1 00:06:52.253 --rc genhtml_function_coverage=1 00:06:52.253 --rc genhtml_legend=1 00:06:52.253 --rc geninfo_all_blocks=1 00:06:52.253 --rc geninfo_unexecuted_blocks=1 00:06:52.253 00:06:52.253 ' 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.253 --rc genhtml_branch_coverage=1 00:06:52.253 --rc genhtml_function_coverage=1 00:06:52.253 --rc genhtml_legend=1 00:06:52.253 --rc geninfo_all_blocks=1 00:06:52.253 --rc geninfo_unexecuted_blocks=1 00:06:52.253 00:06:52.253 ' 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.253 --rc genhtml_branch_coverage=1 00:06:52.253 --rc genhtml_function_coverage=1 00:06:52.253 --rc genhtml_legend=1 00:06:52.253 --rc geninfo_all_blocks=1 00:06:52.253 --rc geninfo_unexecuted_blocks=1 00:06:52.253 00:06:52.253 ' 00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:52.253 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:52.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:52.254 11:40:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:00.396 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:00.396 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:00.396 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:00.397 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:00.397 11:40:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:00.397 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:00.397 11:40:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:00.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:00.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:07:00.397 00:07:00.397 --- 10.0.0.2 ping statistics --- 00:07:00.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.397 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:00.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:07:00.397 00:07:00.397 --- 10.0.0.1 ping statistics --- 00:07:00.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.397 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=825285 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 825285 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:00.397 11:40:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 825285 ']' 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.397 11:40:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.397 [2024-10-11 11:40:44.200943] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:00.397 [2024-10-11 11:40:44.201013] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.397 [2024-10-11 11:40:44.288830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.397 [2024-10-11 11:40:44.341697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:00.397 [2024-10-11 11:40:44.341744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:00.397 [2024-10-11 11:40:44.341752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:00.397 [2024-10-11 11:40:44.341759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:00.397 [2024-10-11 11:40:44.341766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
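The waitforlisten call traced above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) boils down to polling the freshly forked target's RPC socket until it answers, bailing out early if the process dies. A minimal sketch under those assumptions; wait_for_rpc_sock is a hypothetical name, not the autotest helper itself, and rpc_get_methods is used only as a cheap probe RPC:

    # Poll an SPDK app's UNIX-domain RPC socket until it responds, or give up.
    wait_for_rpc_sock() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }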
00:07:00.397 [2024-10-11 11:40:44.344033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.397 [2024-10-11 11:40:44.344196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.397 [2024-10-11 11:40:44.344356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.397 [2024-10-11 11:40:44.344356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:00.397 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.397 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:00.397 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:00.397 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.397 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.659 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.660 [2024-10-11 11:40:45.068197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.660 Malloc0 00:07:00.660 [2024-10-11 11:40:45.148455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=825576 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 825576 /var/tmp/bdevperf.sock 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 825576 ']' 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:00.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:00.660 { 00:07:00.660 "params": { 00:07:00.660 "name": "Nvme$subsystem", 00:07:00.660 "trtype": "$TEST_TRANSPORT", 00:07:00.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:00.660 "adrfam": "ipv4", 00:07:00.660 "trsvcid": "$NVMF_PORT", 00:07:00.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:00.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:00.660 "hdgst": ${hdgst:-false}, 00:07:00.660 "ddgst": ${ddgst:-false} 00:07:00.660 }, 00:07:00.660 "method": "bdev_nvme_attach_controller" 00:07:00.660 } 00:07:00.660 EOF 00:07:00.660 )") 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:00.660 11:40:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:00.660 "params": { 00:07:00.660 "name": "Nvme0", 00:07:00.660 "trtype": "tcp", 00:07:00.660 "traddr": "10.0.0.2", 00:07:00.660 "adrfam": "ipv4", 00:07:00.660 "trsvcid": "4420", 00:07:00.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.660 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:00.660 "hdgst": false, 00:07:00.660 "ddgst": false 00:07:00.660 }, 00:07:00.660 "method": "bdev_nvme_attach_controller" 00:07:00.660 }' 00:07:00.660 [2024-10-11 11:40:45.257707] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
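The --json /dev/fd/63 argument above hands bdevperf the printf-generated controller config through process substitution. Written out as a standalone file, the equivalent would look roughly like the following; the params are the ones printed above, while the outer subsystems/bdev wrapper is assumed from SPDK's JSON-config layout rather than shown in this trace:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same invocation as the trace, minus the process substitution:
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10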
00:07:00.660 [2024-10-11 11:40:45.257776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid825576 ]
00:07:00.922 [2024-10-11 11:40:45.342516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.922 [2024-10-11 11:40:45.396160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.183 Running I/O for 10 seconds...
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=530
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 530 -ge 100 ']'
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
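The waitforio loop traced above fits in a few lines. A sketch assuming rpc_cmd is a thin wrapper over scripts/rpc.py; the quarter-second pause between probes is an addition, not shown in the trace:

    # Poll bdevperf's read counter (up to 10 probes) until at least 100 reads
    # have completed, mirroring target/host_management.sh@52-@64 above.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i count
        for ((i = 10; i != 0; i--)); do
            count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0       # enough I/O observed; the test may proceed
                break
            fi
            sleep 0.25
        done
        return $ret
    }

In the run above the very first probe already sees read_io_count=530, so the loop breaks immediately and returns 0.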
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:01.758 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:01.758 [2024-10-11 11:40:46.148250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:07:01.758 [2024-10-11 11:40:46.148329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.758 [2024-10-11 11:40:46.148354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:07:01.758 [2024-10-11 11:40:46.148363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.758 [2024-10-11 11:40:46.148372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:07:01.758 [2024-10-11 11:40:46.148381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.758 [2024-10-11 11:40:46.148394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:07:01.758 [2024-10-11 11:40:46.148409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.758 [2024-10-11 11:40:46.148424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b4500 is same with the state(6) to be set
00:07:01.758 [2024-10-11 11:40:46.148438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ab630 is same with the state(6) to be set
00:07:01.758 [... the same tcp.c:1773 recv-state message repeats, timestamps 11:40:46.148452 through 11:40:46.148823 ...]
00:07:01.758 [2024-10-11 11:40:46.149078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:01.758 [2024-10-11 11:40:46.149102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.758 [2024-10-11 11:40:46.149123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:01.758 [2024-10-11 11:40:46.149131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.758 [2024-10-11 11:40:46.149142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:01.758 [2024-10-11 11:40:46.149150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.758 [2024-10-11 11:40:46.149160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:01.758 [2024-10-11 11:40:46.149168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.758 [2024-10-11 11:40:46.149178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:01.758 [2024-10-11 11:40:46.149186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.759
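Everything from the first tcp.c:1773 message above follows from the nvmf_subsystem_remove_host call at target/host_management.sh@84: the target tears down the host's queue pairs, and the initiator's outstanding admin ASYNC EVENT REQUESTs and queued verify READs complete as ABORTED - SQ DELETION. The same teardown can be replayed by hand against a running target with the stock RPC:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0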
[2024-10-11 11:40:46.149196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:01.759 [2024-10-11 11:40:46.149214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.759 [... the same READ/completion pair repeats for cid 6 through cid 62, the LBA advancing by 128 blocks per command (74496 through 81664); every READ still queued on sqid:1 completes with ABORTED - SQ DELETION (00/08) ...]
00:07:01.760 [2024-10-11 11:40:46.150240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:01.760 [2024-10-11 11:40:46.150249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:01.760 [2024-10-11 11:40:46.150258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cd7b0 is same with the state(6) to be set
00:07:01.760 [2024-10-11 11:40:46.150327] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19cd7b0 was disconnected and freed. reset controller.
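The flood of READ completions above is the signature of a submission-queue teardown: once the target drops qid:1 mid-workload, every command still outstanding on that queue pair is completed with ABORTED - SQ DELETION (00/08), after which bdev_nvme frees the qpair and schedules the controller reset seen next. A minimal shell sketch for summarizing such a flood from a saved copy of this log (the file name build.log is an assumption, not something the harness produces):

# Count aborted completions; grep -o counts per occurrence, not per
# (wrapped) log line.
grep -o 'ABORTED - SQ DELETION (00/08)' build.log | wc -l

# Tally the aborted READs per submission queue id to confirm which qpair
# was torn down.
grep -o 'READ sqid:[0-9]*' build.log | sort | uniq -c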
00:07:01.760 [2024-10-11 11:40:46.151572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:01.760 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.760 task offset: 73728 on job bdev=Nvme0n1 fails 00:07:01.760 00:07:01.760 Latency(us) 00:07:01.760 [2024-10-11T09:40:46.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.760 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:01.760 Job: Nvme0n1 ended in about 0.42 seconds with error 00:07:01.760 Verification LBA range: start 0x0 length 0x400 00:07:01.760 Nvme0n1 : 0.42 1366.24 85.39 151.80 0.00 40900.14 7918.93 35607.89 00:07:01.760 [2024-10-11T09:40:46.392Z] =================================================================================================================== 00:07:01.760 [2024-10-11T09:40:46.392Z] Total : 1366.24 85.39 151.80 0.00 40900.14 7918.93 35607.89 00:07:01.760 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:01.760 [2024-10-11 11:40:46.153840] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.760 [2024-10-11 11:40:46.153878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b4500 (9): Bad file descriptor 00:07:01.760 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.760 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.760 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.760 11:40:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:01.760 [2024-10-11 11:40:46.175198] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
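With the failed short job accounted for (the table above shows Nvme0n1 ending in error after about 0.42 seconds as the controller resets), the test re-adds the host entry on the subsystem via rpc_cmd, which forwards to rpc.py over the application's RPC socket. A sketch of the equivalent standalone call, assuming the default socket path /var/tmp/spdk.sock:

# Authorize host0 on cnode0 so the follow-up bdevperf run can reconnect.
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0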
00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 825576 00:07:02.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (825576) - No such process 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:02.703 { 00:07:02.703 "params": { 00:07:02.703 "name": "Nvme$subsystem", 00:07:02.703 "trtype": "$TEST_TRANSPORT", 00:07:02.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:02.703 "adrfam": "ipv4", 00:07:02.703 "trsvcid": "$NVMF_PORT", 00:07:02.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:02.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:02.703 "hdgst": ${hdgst:-false}, 00:07:02.703 "ddgst": ${ddgst:-false} 00:07:02.703 }, 00:07:02.703 "method": "bdev_nvme_attach_controller" 00:07:02.703 } 00:07:02.703 EOF 00:07:02.703 )") 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:02.703 11:40:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:02.703 "params": { 00:07:02.703 "name": "Nvme0", 00:07:02.703 "trtype": "tcp", 00:07:02.703 "traddr": "10.0.0.2", 00:07:02.703 "adrfam": "ipv4", 00:07:02.703 "trsvcid": "4420", 00:07:02.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:02.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:02.703 "hdgst": false, 00:07:02.703 "ddgst": false 00:07:02.703 }, 00:07:02.703 "method": "bdev_nvme_attach_controller" 00:07:02.703 }' 00:07:02.703 [2024-10-11 11:40:47.225308] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:07:02.703 [2024-10-11 11:40:47.225362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826011 ] 00:07:02.703 [2024-10-11 11:40:47.302862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.963 [2024-10-11 11:40:47.337614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.224 Running I/O for 1 seconds... 
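gen_nvmf_target_json above stitches the bdev_nvme_attach_controller stanza together with jq and hands it to bdevperf as an anonymous file via --json /dev/fd/62. A standalone sketch of the same run, writing the stanza the harness printed into a regular file first; the outer subsystems/bdev wrapper is not visible in the log and is reproduced here from SPDK's standard JSON config layout, and /tmp/bdevperf.json is an assumed path:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same knobs as the harness run: queue depth 64, 64 KiB verify I/O, 1 second.
build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1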
00:07:04.167 1536.00 IOPS, 96.00 MiB/s 00:07:04.167 Latency(us) 00:07:04.167 [2024-10-11T09:40:48.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:04.167 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:04.167 Verification LBA range: start 0x0 length 0x400 00:07:04.167 Nvme0n1 : 1.01 1588.28 99.27 0.00 0.00 39584.84 6253.23 32986.45 00:07:04.167 [2024-10-11T09:40:48.799Z] =================================================================================================================== 00:07:04.167 [2024-10-11T09:40:48.799Z] Total : 1588.28 99.27 0.00 0.00 39584.84 6253.23 32986.45 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:04.167 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:04.167 rmmod nvme_tcp 00:07:04.167 rmmod nvme_fabrics 00:07:04.167 rmmod nvme_keyring 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 825285 ']' 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 825285 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 825285 ']' 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 825285 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 825285 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 825285' 00:07:04.428 killing process with pid 825285 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 825285 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 825285 00:07:04.428 [2024-10-11 11:40:48.972097] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:04.428 11:40:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:04.428 11:40:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:04.428 11:40:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.428 11:40:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.428 11:40:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:06.975 00:07:06.975 real 0m14.661s 00:07:06.975 user 0m23.388s 00:07:06.975 sys 0m6.708s 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:06.975 ************************************ 00:07:06.975 END TEST nvmf_host_management 00:07:06.975 ************************************ 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.975 ************************************ 00:07:06.975 START TEST nvmf_lvol 00:07:06.975 ************************************ 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:06.975 * Looking for test storage... 00:07:06.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.975 --rc genhtml_branch_coverage=1 00:07:06.975 --rc genhtml_function_coverage=1 00:07:06.975 --rc genhtml_legend=1 00:07:06.975 --rc geninfo_all_blocks=1 00:07:06.975 --rc geninfo_unexecuted_blocks=1 00:07:06.975 00:07:06.975 ' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:06.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.975 --rc genhtml_branch_coverage=1 00:07:06.975 --rc genhtml_function_coverage=1 00:07:06.975 --rc genhtml_legend=1 00:07:06.975 --rc geninfo_all_blocks=1 00:07:06.975 --rc geninfo_unexecuted_blocks=1 00:07:06.975 00:07:06.975 ' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.975 --rc genhtml_branch_coverage=1 00:07:06.975 --rc genhtml_function_coverage=1 00:07:06.975 --rc genhtml_legend=1 00:07:06.975 --rc geninfo_all_blocks=1 00:07:06.975 --rc geninfo_unexecuted_blocks=1 00:07:06.975 00:07:06.975 ' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.975 --rc genhtml_branch_coverage=1 00:07:06.975 --rc genhtml_function_coverage=1 00:07:06.975 --rc genhtml_legend=1 00:07:06.975 --rc geninfo_all_blocks=1 00:07:06.975 --rc geninfo_unexecuted_blocks=1 00:07:06.975 00:07:06.975 ' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
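The decimal/cmp_versions trace above is the harness deciding whether the installed lcov predates 2.x before choosing coverage flags. The check is ordinary dotted-version arithmetic; a self-contained sketch follows (the helper name version_lt is ours, not part of scripts/common.sh):

version_lt() {
    # Split both versions on dots and compare component-wise, padding the
    # shorter one with zeros; succeed when $1 < $2.
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1
}

version_lt 1.15 2 && echo 'lcov is older than 2.0'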
00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.975 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:06.976 11:40:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:15.117 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:15.117 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.117 11:40:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:15.117 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:15.117 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.117 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:07:15.118 00:07:15.118 --- 10.0.0.2 ping statistics --- 00:07:15.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.118 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:07:15.118 00:07:15.118 --- 10.0.0.1 ping statistics --- 00:07:15.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.118 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=830551 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 830551 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 830551 ']' 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.118 11:40:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.118 [2024-10-11 11:40:58.983992] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:07:15.118 [2024-10-11 11:40:58.984057] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.118 [2024-10-11 11:40:59.074635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.118 [2024-10-11 11:40:59.131781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.118 [2024-10-11 11:40:59.131839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.118 [2024-10-11 11:40:59.131852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.118 [2024-10-11 11:40:59.131860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.118 [2024-10-11 11:40:59.131866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.118 [2024-10-11 11:40:59.133775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.118 [2024-10-11 11:40:59.133918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.118 [2024-10-11 11:40:59.133920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.379 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.379 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:15.379 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:15.379 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.379 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.379 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.379 11:40:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:15.640 [2024-10-11 11:41:00.018333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.640 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.902 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:15.902 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:15.902 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:15.902 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:16.164 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:16.424 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2ac76487-f808-4211-a067-b8a108cf569e 00:07:16.425 11:41:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2ac76487-f808-4211-a067-b8a108cf569e lvol 20 00:07:16.686 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=35180684-3cb6-44d6-b208-dd8c1c934cec 00:07:16.686 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:16.686 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 35180684-3cb6-44d6-b208-dd8c1c934cec 00:07:16.947 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:17.207 [2024-10-11 11:41:01.678155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.207 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.468 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=831104 00:07:17.468 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:17.468 11:41:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:18.409 11:41:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 35180684-3cb6-44d6-b208-dd8c1c934cec MY_SNAPSHOT 00:07:18.669 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a2942391-7b9e-4378-bdb6-c428a1958b11 00:07:18.669 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 35180684-3cb6-44d6-b208-dd8c1c934cec 30 00:07:18.930 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a2942391-7b9e-4378-bdb6-c428a1958b11 MY_CLONE 00:07:18.930 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=63636924-4253-499b-ae0a-1c41db1a7a6b 00:07:18.930 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 63636924-4253-499b-ae0a-1c41db1a7a6b 00:07:19.500 11:41:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 831104 00:07:27.871 Initializing NVMe Controllers 00:07:27.871 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:27.871 Controller IO queue size 128, less than required. 00:07:27.871 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:27.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:27.871 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:27.871 Initialization complete. Launching workers. 00:07:27.871 ======================================================== 00:07:27.871 Latency(us) 00:07:27.871 Device Information : IOPS MiB/s Average min max 00:07:27.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16382.50 63.99 7816.62 1568.18 62640.82 00:07:27.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16305.30 63.69 7852.22 906.05 55789.03 00:07:27.871 ======================================================== 00:07:27.871 Total : 32687.80 127.69 7834.38 906.05 62640.82 00:07:27.871 00:07:27.871 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.871 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 35180684-3cb6-44d6-b208-dd8c1c934cec 00:07:28.132 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ac76487-f808-4211-a067-b8a108cf569e 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:28.393 rmmod nvme_tcp 00:07:28.393 rmmod nvme_fabrics 00:07:28.393 rmmod nvme_keyring 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 830551 ']' 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 830551 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 830551 ']' 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 830551 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:28.393 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.394 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 830551 00:07:28.394 11:41:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.394 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.394 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 830551' 00:07:28.394 killing process with pid 830551 00:07:28.394 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 830551 00:07:28.394 11:41:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 830551 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.655 11:41:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.569 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:30.569 00:07:30.569 real 0m24.014s 00:07:30.569 user 1m5.203s 00:07:30.569 sys 0m8.555s 00:07:30.569 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.569 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.569 ************************************ 00:07:30.569 END TEST nvmf_lvol 00:07:30.569 ************************************ 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.830 ************************************ 00:07:30.830 START TEST nvmf_lvs_grow 00:07:30.830 ************************************ 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.830 * Looking for test storage... 
00:07:30.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:30.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.830 --rc genhtml_branch_coverage=1 00:07:30.830 --rc genhtml_function_coverage=1 00:07:30.830 --rc genhtml_legend=1 00:07:30.830 --rc geninfo_all_blocks=1 00:07:30.830 --rc geninfo_unexecuted_blocks=1 00:07:30.830 00:07:30.830 ' 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:30.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.830 --rc genhtml_branch_coverage=1 00:07:30.830 --rc genhtml_function_coverage=1 00:07:30.830 --rc genhtml_legend=1 00:07:30.830 --rc geninfo_all_blocks=1 00:07:30.830 --rc geninfo_unexecuted_blocks=1 00:07:30.830 00:07:30.830 ' 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:30.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.830 --rc genhtml_branch_coverage=1 00:07:30.830 --rc genhtml_function_coverage=1 00:07:30.830 --rc genhtml_legend=1 00:07:30.830 --rc geninfo_all_blocks=1 00:07:30.830 --rc geninfo_unexecuted_blocks=1 00:07:30.830 00:07:30.830 ' 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:30.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.830 --rc genhtml_branch_coverage=1 00:07:30.830 --rc genhtml_function_coverage=1 00:07:30.830 --rc genhtml_legend=1 00:07:30.830 --rc geninfo_all_blocks=1 00:07:30.830 --rc geninfo_unexecuted_blocks=1 00:07:30.830 00:07:30.830 ' 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:30.830 11:41:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.830 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.091 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.092 11:41:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:39.230 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.230 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:39.230 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.231 11:41:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:39.231 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:39.231 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:07:39.231 00:07:39.231 --- 10.0.0.2 ping statistics --- 00:07:39.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.231 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:07:39.231 00:07:39.231 --- 10.0.0.1 ping statistics --- 00:07:39.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.231 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=837608 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 837608 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 837608 ']' 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.231 11:41:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 [2024-10-11 11:41:22.926983] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
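
The interface plumbing the trace has just walked through is worth pulling out: the harness moves the target-side port of the detected E810 NIC pair into a private network namespace, so initiator and target traffic cross a real link rather than loopback. A sketch of the same steps, assuming the cvl_0_0/cvl_0_1 names the harness derived for this host (substitute your own interfaces):

  # Target port lives in its own netns at 10.0.0.2; the initiator keeps 10.0.0.1.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

This is also why every nvmf_tgt invocation in the trace is wrapped in 'ip netns exec cvl_0_0_ns_spdk'.
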
00:07:39.231 [2024-10-11 11:41:22.927049] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.231 [2024-10-11 11:41:23.014414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.231 [2024-10-11 11:41:23.066422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.231 [2024-10-11 11:41:23.066468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.231 [2024-10-11 11:41:23.066476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.231 [2024-10-11 11:41:23.066483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.231 [2024-10-11 11:41:23.066489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.231 [2024-10-11 11:41:23.067241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.231 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.231 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:39.231 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:39.231 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.231 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.231 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:39.492 [2024-10-11 11:41:23.942322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.492 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:39.492 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.492 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.492 11:41:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:39.492 ************************************ 00:07:39.492 START TEST lvs_grow_clean 00:07:39.492 ************************************ 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:39.492 11:41:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:39.492 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.754 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:39.754 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:40.015 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:40.015 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:40.015 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:40.015 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:40.015 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:40.015 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 lvol 150 00:07:40.276 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=00d04520-41de-411d-b7f3-0ad3f28aba19 00:07:40.276 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:40.276 11:41:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:40.536 [2024-10-11 11:41:24.996159] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:40.536 [2024-10-11 11:41:24.996233] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:40.536 true 00:07:40.536 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:40.536 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:40.797 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:40.797 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:40.797 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00d04520-41de-411d-b7f3-0ad3f28aba19 00:07:41.058 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:41.320 [2024-10-11 11:41:25.702389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=838198 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 838198 /var/tmp/bdevperf.sock 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 838198 ']' 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:41.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.320 11:41:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:41.320 [2024-10-11 11:41:25.946853] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
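
The bdevperf launch just traced follows SPDK's detached pattern: started with -z, bdevperf idles until it is configured over its own RPC socket and explicitly told to run. A sketch of the three-step flow the test uses, with paths and flags copied from the trace (the wait between steps is shown as a comment because the harness handles it via waitforlisten):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock

  # 1. Start bdevperf idle (-z), with the IO job preconfigured on the command line.
  $SPDK/build/examples/bdevperf -r $SOCK -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # ... wait until $SOCK accepts RPCs ...
  # 2. Attach the exported lvol as a local NVMe-oF bdev named Nvme0.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # 3. Kick off the preconfigured job; prints the per-second latency lines seen below.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

Running bdev_lvol_grow_lvstore while this 10-second randwrite job is in flight is the point of the test: IO continues against Nvme0n1 while the underlying pool doubles.
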
00:07:41.320 [2024-10-11 11:41:25.946926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838198 ] 00:07:41.580 [2024-10-11 11:41:26.030089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.580 [2024-10-11 11:41:26.083052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.152 11:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.152 11:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:42.152 11:41:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:42.413 Nvme0n1 00:07:42.673 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:42.673 [ 00:07:42.673 { 00:07:42.673 "name": "Nvme0n1", 00:07:42.673 "aliases": [ 00:07:42.673 "00d04520-41de-411d-b7f3-0ad3f28aba19" 00:07:42.673 ], 00:07:42.673 "product_name": "NVMe disk", 00:07:42.673 "block_size": 4096, 00:07:42.673 "num_blocks": 38912, 00:07:42.673 "uuid": "00d04520-41de-411d-b7f3-0ad3f28aba19", 00:07:42.673 "numa_id": 0, 00:07:42.673 "assigned_rate_limits": { 00:07:42.673 "rw_ios_per_sec": 0, 00:07:42.673 "rw_mbytes_per_sec": 0, 00:07:42.673 "r_mbytes_per_sec": 0, 00:07:42.673 "w_mbytes_per_sec": 0 00:07:42.673 }, 00:07:42.673 "claimed": false, 00:07:42.673 "zoned": false, 00:07:42.673 "supported_io_types": { 00:07:42.673 "read": true, 00:07:42.673 "write": true, 00:07:42.673 "unmap": true, 00:07:42.673 "flush": true, 00:07:42.673 "reset": true, 00:07:42.673 "nvme_admin": true, 00:07:42.673 "nvme_io": true, 00:07:42.673 "nvme_io_md": false, 00:07:42.673 "write_zeroes": true, 00:07:42.673 "zcopy": false, 00:07:42.673 "get_zone_info": false, 00:07:42.673 "zone_management": false, 00:07:42.673 "zone_append": false, 00:07:42.673 "compare": true, 00:07:42.673 "compare_and_write": true, 00:07:42.673 "abort": true, 00:07:42.673 "seek_hole": false, 00:07:42.673 "seek_data": false, 00:07:42.673 "copy": true, 00:07:42.673 "nvme_iov_md": false 00:07:42.673 }, 00:07:42.673 "memory_domains": [ 00:07:42.673 { 00:07:42.673 "dma_device_id": "system", 00:07:42.673 "dma_device_type": 1 00:07:42.673 } 00:07:42.673 ], 00:07:42.673 "driver_specific": { 00:07:42.673 "nvme": [ 00:07:42.673 { 00:07:42.673 "trid": { 00:07:42.673 "trtype": "TCP", 00:07:42.673 "adrfam": "IPv4", 00:07:42.673 "traddr": "10.0.0.2", 00:07:42.673 "trsvcid": "4420", 00:07:42.673 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:42.673 }, 00:07:42.673 "ctrlr_data": { 00:07:42.673 "cntlid": 1, 00:07:42.673 "vendor_id": "0x8086", 00:07:42.673 "model_number": "SPDK bdev Controller", 00:07:42.673 "serial_number": "SPDK0", 00:07:42.674 "firmware_revision": "25.01", 00:07:42.674 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:42.674 "oacs": { 00:07:42.674 "security": 0, 00:07:42.674 "format": 0, 00:07:42.674 "firmware": 0, 00:07:42.674 "ns_manage": 0 00:07:42.674 }, 00:07:42.674 "multi_ctrlr": true, 00:07:42.674 
"ana_reporting": false 00:07:42.674 }, 00:07:42.674 "vs": { 00:07:42.674 "nvme_version": "1.3" 00:07:42.674 }, 00:07:42.674 "ns_data": { 00:07:42.674 "id": 1, 00:07:42.674 "can_share": true 00:07:42.674 } 00:07:42.674 } 00:07:42.674 ], 00:07:42.674 "mp_policy": "active_passive" 00:07:42.674 } 00:07:42.674 } 00:07:42.674 ] 00:07:42.674 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=838534 00:07:42.674 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:42.674 11:41:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:42.934 Running I/O for 10 seconds... 00:07:43.875 Latency(us) 00:07:43.875 [2024-10-11T09:41:28.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.875 Nvme0n1 : 1.00 25175.00 98.34 0.00 0.00 0.00 0.00 0.00 00:07:43.875 [2024-10-11T09:41:28.507Z] =================================================================================================================== 00:07:43.875 [2024-10-11T09:41:28.507Z] Total : 25175.00 98.34 0.00 0.00 0.00 0.00 0.00 00:07:43.875 00:07:44.816 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:44.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.816 Nvme0n1 : 2.00 25328.00 98.94 0.00 0.00 0.00 0.00 0.00 00:07:44.816 [2024-10-11T09:41:29.448Z] =================================================================================================================== 00:07:44.816 [2024-10-11T09:41:29.448Z] Total : 25328.00 98.94 0.00 0.00 0.00 0.00 0.00 00:07:44.816 00:07:44.816 true 00:07:44.816 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:44.816 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:45.076 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:45.076 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:45.076 11:41:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 838534 00:07:46.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.016 Nvme0n1 : 3.00 25190.33 98.40 0.00 0.00 0.00 0.00 0.00 00:07:46.016 [2024-10-11T09:41:30.648Z] =================================================================================================================== 00:07:46.016 [2024-10-11T09:41:30.648Z] Total : 25190.33 98.40 0.00 0.00 0.00 0.00 0.00 00:07:46.016 00:07:46.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.957 Nvme0n1 : 4.00 25098.75 98.04 0.00 0.00 0.00 0.00 0.00 00:07:46.957 [2024-10-11T09:41:31.589Z] 
=================================================================================================================== 00:07:46.957 [2024-10-11T09:41:31.589Z] Total : 25098.75 98.04 0.00 0.00 0.00 0.00 0.00 00:07:46.957 00:07:47.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.896 Nvme0n1 : 5.00 25053.40 97.86 0.00 0.00 0.00 0.00 0.00 00:07:47.896 [2024-10-11T09:41:32.528Z] =================================================================================================================== 00:07:47.896 [2024-10-11T09:41:32.528Z] Total : 25053.40 97.86 0.00 0.00 0.00 0.00 0.00 00:07:47.896 00:07:48.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.837 Nvme0n1 : 6.00 25023.17 97.75 0.00 0.00 0.00 0.00 0.00 00:07:48.837 [2024-10-11T09:41:33.469Z] =================================================================================================================== 00:07:48.837 [2024-10-11T09:41:33.469Z] Total : 25023.17 97.75 0.00 0.00 0.00 0.00 0.00 00:07:48.837 00:07:49.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.777 Nvme0n1 : 7.00 25010.71 97.70 0.00 0.00 0.00 0.00 0.00 00:07:49.777 [2024-10-11T09:41:34.409Z] =================================================================================================================== 00:07:49.777 [2024-10-11T09:41:34.409Z] Total : 25010.71 97.70 0.00 0.00 0.00 0.00 0.00 00:07:49.777 00:07:50.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.718 Nvme0n1 : 8.00 25002.38 97.67 0.00 0.00 0.00 0.00 0.00 00:07:50.718 [2024-10-11T09:41:35.350Z] =================================================================================================================== 00:07:50.718 [2024-10-11T09:41:35.350Z] Total : 25002.38 97.67 0.00 0.00 0.00 0.00 0.00 00:07:50.718 00:07:52.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.100 Nvme0n1 : 9.00 24997.67 97.65 0.00 0.00 0.00 0.00 0.00 00:07:52.100 [2024-10-11T09:41:36.732Z] =================================================================================================================== 00:07:52.100 [2024-10-11T09:41:36.732Z] Total : 24997.67 97.65 0.00 0.00 0.00 0.00 0.00 00:07:52.100 00:07:53.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.039 Nvme0n1 : 10.00 24991.50 97.62 0.00 0.00 0.00 0.00 0.00 00:07:53.039 [2024-10-11T09:41:37.671Z] =================================================================================================================== 00:07:53.039 [2024-10-11T09:41:37.671Z] Total : 24991.50 97.62 0.00 0.00 0.00 0.00 0.00 00:07:53.039 00:07:53.039 00:07:53.039 Latency(us) 00:07:53.039 [2024-10-11T09:41:37.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.039 Nvme0n1 : 10.01 24989.99 97.62 0.00 0.00 5118.08 2293.76 9284.27 00:07:53.039 [2024-10-11T09:41:37.671Z] =================================================================================================================== 00:07:53.039 [2024-10-11T09:41:37.671Z] Total : 24989.99 97.62 0.00 0.00 5118.08 2293.76 9284.27 00:07:53.039 { 00:07:53.039 "results": [ 00:07:53.039 { 00:07:53.039 "job": "Nvme0n1", 00:07:53.039 "core_mask": "0x2", 00:07:53.039 "workload": "randwrite", 00:07:53.039 "status": "finished", 00:07:53.039 "queue_depth": 128, 00:07:53.039 "io_size": 4096, 00:07:53.039 
"runtime": 10.005087, 00:07:53.039 "iops": 24989.98759331128, 00:07:53.039 "mibps": 97.6171390363722, 00:07:53.039 "io_failed": 0, 00:07:53.039 "io_timeout": 0, 00:07:53.039 "avg_latency_us": 5118.084917228939, 00:07:53.039 "min_latency_us": 2293.76, 00:07:53.039 "max_latency_us": 9284.266666666666 00:07:53.039 } 00:07:53.039 ], 00:07:53.039 "core_count": 1 00:07:53.039 } 00:07:53.039 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 838198 00:07:53.039 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 838198 ']' 00:07:53.039 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 838198 00:07:53.039 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:53.039 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.039 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 838198 00:07:53.040 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:53.040 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:53.040 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 838198' 00:07:53.040 killing process with pid 838198 00:07:53.040 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 838198 00:07:53.040 Received shutdown signal, test time was about 10.000000 seconds 00:07:53.040 00:07:53.040 Latency(us) 00:07:53.040 [2024-10-11T09:41:37.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.040 [2024-10-11T09:41:37.672Z] =================================================================================================================== 00:07:53.040 [2024-10-11T09:41:37.672Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:53.040 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 838198 00:07:53.040 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.300 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:53.561 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:53.561 11:41:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:53.561 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:53.561 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:53.561 11:41:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:53.832 [2024-10-11 11:41:38.252229] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:53.832 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:53.832 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:53.833 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:53.833 request: 00:07:53.833 { 00:07:53.833 "uuid": "918b0b36-b7ff-4992-a20d-d47cc0bf5d71", 00:07:53.833 "method": "bdev_lvol_get_lvstores", 00:07:53.833 "req_id": 1 00:07:53.833 } 00:07:53.833 Got JSON-RPC error response 00:07:53.833 response: 00:07:53.833 { 00:07:53.833 "code": -19, 00:07:53.833 "message": "No such device" 00:07:53.833 } 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.097 aio_bdev 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 00d04520-41de-411d-b7f3-0ad3f28aba19 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=00d04520-41de-411d-b7f3-0ad3f28aba19 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:54.097 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:54.098 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:54.358 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 00d04520-41de-411d-b7f3-0ad3f28aba19 -t 2000 00:07:54.358 [ 00:07:54.358 { 00:07:54.358 "name": "00d04520-41de-411d-b7f3-0ad3f28aba19", 00:07:54.358 "aliases": [ 00:07:54.358 "lvs/lvol" 00:07:54.358 ], 00:07:54.358 "product_name": "Logical Volume", 00:07:54.358 "block_size": 4096, 00:07:54.358 "num_blocks": 38912, 00:07:54.358 "uuid": "00d04520-41de-411d-b7f3-0ad3f28aba19", 00:07:54.358 "assigned_rate_limits": { 00:07:54.358 "rw_ios_per_sec": 0, 00:07:54.358 "rw_mbytes_per_sec": 0, 00:07:54.358 "r_mbytes_per_sec": 0, 00:07:54.358 "w_mbytes_per_sec": 0 00:07:54.358 }, 00:07:54.358 "claimed": false, 00:07:54.358 "zoned": false, 00:07:54.358 "supported_io_types": { 00:07:54.358 "read": true, 00:07:54.358 "write": true, 00:07:54.358 "unmap": true, 00:07:54.358 "flush": false, 00:07:54.358 "reset": true, 00:07:54.358 "nvme_admin": false, 00:07:54.358 "nvme_io": false, 00:07:54.358 "nvme_io_md": false, 00:07:54.358 "write_zeroes": true, 00:07:54.358 "zcopy": false, 00:07:54.358 "get_zone_info": false, 00:07:54.358 "zone_management": false, 00:07:54.358 "zone_append": false, 00:07:54.358 "compare": false, 00:07:54.358 "compare_and_write": false, 00:07:54.358 "abort": false, 00:07:54.358 "seek_hole": true, 00:07:54.358 "seek_data": true, 00:07:54.358 "copy": false, 00:07:54.358 "nvme_iov_md": false 00:07:54.358 }, 00:07:54.358 "driver_specific": { 00:07:54.358 "lvol": { 00:07:54.358 "lvol_store_uuid": "918b0b36-b7ff-4992-a20d-d47cc0bf5d71", 00:07:54.358 "base_bdev": "aio_bdev", 00:07:54.358 "thin_provision": false, 00:07:54.358 "num_allocated_clusters": 38, 00:07:54.358 "snapshot": false, 00:07:54.358 "clone": false, 00:07:54.358 "esnap_clone": false 00:07:54.358 } 00:07:54.358 } 00:07:54.358 } 00:07:54.358 ] 00:07:54.358 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:54.358 11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:54.358 
11:41:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:54.618 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:54.619 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:54.619 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:54.879 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:54.879 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 00d04520-41de-411d-b7f3-0ad3f28aba19 00:07:54.879 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 918b0b36-b7ff-4992-a20d-d47cc0bf5d71 00:07:55.138 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.399 00:07:55.399 real 0m15.858s 00:07:55.399 user 0m15.412s 00:07:55.399 sys 0m1.558s 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:55.399 ************************************ 00:07:55.399 END TEST lvs_grow_clean 00:07:55.399 ************************************ 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.399 ************************************ 00:07:55.399 START TEST lvs_grow_dirty 00:07:55.399 ************************************ 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.399 11:41:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:55.660 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:55.660 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:55.920 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=08f6f0b7-250d-4c18-8def-82c5688fca4f 00:07:55.920 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:07:55.920 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:55.920 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:55.920 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:55.920 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08f6f0b7-250d-4c18-8def-82c5688fca4f lvol 150 00:07:56.180 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dba56590-6212-4a21-8bc3-ccdf101346ce 00:07:56.181 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.181 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:56.440 [2024-10-11 11:41:40.865390] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:56.440 [2024-10-11 11:41:40.865432] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:56.440 true 00:07:56.440 11:41:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:07:56.440 11:41:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:56.440 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:56.440 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:56.701 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dba56590-6212-4a21-8bc3-ccdf101346ce 00:07:56.961 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.961 [2024-10-11 11:41:41.515270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.961 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=841473 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 841473 /var/tmp/bdevperf.sock 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 841473 ']' 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.221 11:41:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:57.221 [2024-10-11 11:41:41.755571] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
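For reference, the dirty-run setup traced above condenses to the RPC sequence below. This is a sketch, not the harness code itself: $SPDK, $RPC and $AIO are shorthands introduced here for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk paths that appear throughout the trace, the discovery listener and wait helpers are omitted, and the UUIDs are the ones this particular run generated.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  AIO="$SPDK/test/nvmf/target/aio_bdev"
  truncate -s 200M "$AIO"                         # 200M backing file
  $RPC bdev_aio_create "$AIO" aio_bdev 4096       # AIO bdev: 51200 blocks of 4096B
  $RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
                                                  # 49 x 4MiB data clusters on 200M
  $RPC bdev_lvol_create -u 08f6f0b7-250d-4c18-8def-82c5688fca4f lvol 150   # 150M lvol
  truncate -s 400M "$AIO"                         # grow the file underneath the bdev
  $RPC bdev_aio_rescan aio_bdev                   # block count goes 51200 -> 102400
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dba56590-6212-4a21-8bc3-ccdf101346ce
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that total_data_clusters is still 49 at this point: the rescan grew the bdev, not the lvstore. bdevperf attaches over TCP next, and two seconds into its 10-second randwrite run the harness issues bdev_lvol_grow_lvstore -u 08f6f0b7-250d-4c18-8def-82c5688fca4f, so the grow to 99 clusters happens under live I/O.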
00:07:57.221 [2024-10-11 11:41:41.755623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841473 ] 00:07:57.221 [2024-10-11 11:41:41.831523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.482 [2024-10-11 11:41:41.861458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.053 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.053 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:58.053 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:58.313 Nvme0n1 00:07:58.313 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:58.574 [ 00:07:58.574 { 00:07:58.574 "name": "Nvme0n1", 00:07:58.574 "aliases": [ 00:07:58.574 "dba56590-6212-4a21-8bc3-ccdf101346ce" 00:07:58.574 ], 00:07:58.574 "product_name": "NVMe disk", 00:07:58.574 "block_size": 4096, 00:07:58.574 "num_blocks": 38912, 00:07:58.574 "uuid": "dba56590-6212-4a21-8bc3-ccdf101346ce", 00:07:58.574 "numa_id": 0, 00:07:58.574 "assigned_rate_limits": { 00:07:58.574 "rw_ios_per_sec": 0, 00:07:58.574 "rw_mbytes_per_sec": 0, 00:07:58.574 "r_mbytes_per_sec": 0, 00:07:58.574 "w_mbytes_per_sec": 0 00:07:58.574 }, 00:07:58.574 "claimed": false, 00:07:58.574 "zoned": false, 00:07:58.574 "supported_io_types": { 00:07:58.574 "read": true, 00:07:58.574 "write": true, 00:07:58.574 "unmap": true, 00:07:58.574 "flush": true, 00:07:58.574 "reset": true, 00:07:58.574 "nvme_admin": true, 00:07:58.574 "nvme_io": true, 00:07:58.574 "nvme_io_md": false, 00:07:58.574 "write_zeroes": true, 00:07:58.574 "zcopy": false, 00:07:58.574 "get_zone_info": false, 00:07:58.574 "zone_management": false, 00:07:58.574 "zone_append": false, 00:07:58.574 "compare": true, 00:07:58.574 "compare_and_write": true, 00:07:58.574 "abort": true, 00:07:58.574 "seek_hole": false, 00:07:58.574 "seek_data": false, 00:07:58.574 "copy": true, 00:07:58.574 "nvme_iov_md": false 00:07:58.574 }, 00:07:58.574 "memory_domains": [ 00:07:58.574 { 00:07:58.574 "dma_device_id": "system", 00:07:58.574 "dma_device_type": 1 00:07:58.574 } 00:07:58.574 ], 00:07:58.574 "driver_specific": { 00:07:58.574 "nvme": [ 00:07:58.574 { 00:07:58.574 "trid": { 00:07:58.574 "trtype": "TCP", 00:07:58.574 "adrfam": "IPv4", 00:07:58.574 "traddr": "10.0.0.2", 00:07:58.574 "trsvcid": "4420", 00:07:58.574 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:58.574 }, 00:07:58.574 "ctrlr_data": { 00:07:58.574 "cntlid": 1, 00:07:58.574 "vendor_id": "0x8086", 00:07:58.574 "model_number": "SPDK bdev Controller", 00:07:58.574 "serial_number": "SPDK0", 00:07:58.574 "firmware_revision": "25.01", 00:07:58.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.574 "oacs": { 00:07:58.574 "security": 0, 00:07:58.574 "format": 0, 00:07:58.574 "firmware": 0, 00:07:58.574 "ns_manage": 0 00:07:58.574 }, 00:07:58.574 "multi_ctrlr": true, 00:07:58.574 
"ana_reporting": false 00:07:58.574 }, 00:07:58.574 "vs": { 00:07:58.574 "nvme_version": "1.3" 00:07:58.574 }, 00:07:58.574 "ns_data": { 00:07:58.574 "id": 1, 00:07:58.574 "can_share": true 00:07:58.574 } 00:07:58.574 } 00:07:58.574 ], 00:07:58.574 "mp_policy": "active_passive" 00:07:58.574 } 00:07:58.574 } 00:07:58.574 ] 00:07:58.574 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=841643 00:07:58.574 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:58.574 11:41:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.574 Running I/O for 10 seconds... 00:07:59.517 Latency(us) 00:07:59.517 [2024-10-11T09:41:44.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.518 Nvme0n1 : 1.00 25110.00 98.09 0.00 0.00 0.00 0.00 0.00 00:07:59.518 [2024-10-11T09:41:44.150Z] =================================================================================================================== 00:07:59.518 [2024-10-11T09:41:44.150Z] Total : 25110.00 98.09 0.00 0.00 0.00 0.00 0.00 00:07:59.518 00:08:00.460 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:00.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.460 Nvme0n1 : 2.00 25290.50 98.79 0.00 0.00 0.00 0.00 0.00 00:08:00.460 [2024-10-11T09:41:45.092Z] =================================================================================================================== 00:08:00.460 [2024-10-11T09:41:45.092Z] Total : 25290.50 98.79 0.00 0.00 0.00 0.00 0.00 00:08:00.460 00:08:00.721 true 00:08:00.721 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:00.721 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:00.721 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:00.721 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:00.721 11:41:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 841643 00:08:01.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.664 Nvme0n1 : 3.00 25370.67 99.10 0.00 0.00 0.00 0.00 0.00 00:08:01.664 [2024-10-11T09:41:46.296Z] =================================================================================================================== 00:08:01.664 [2024-10-11T09:41:46.296Z] Total : 25370.67 99.10 0.00 0.00 0.00 0.00 0.00 00:08:01.664 00:08:02.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.607 Nvme0n1 : 4.00 25427.25 99.33 0.00 0.00 0.00 0.00 0.00 00:08:02.607 [2024-10-11T09:41:47.239Z] 
=================================================================================================================== 00:08:02.607 [2024-10-11T09:41:47.239Z] Total : 25427.25 99.33 0.00 0.00 0.00 0.00 0.00 00:08:02.607 00:08:03.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.550 Nvme0n1 : 5.00 25474.80 99.51 0.00 0.00 0.00 0.00 0.00 00:08:03.550 [2024-10-11T09:41:48.182Z] =================================================================================================================== 00:08:03.550 [2024-10-11T09:41:48.182Z] Total : 25474.80 99.51 0.00 0.00 0.00 0.00 0.00 00:08:03.550 00:08:04.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.492 Nvme0n1 : 6.00 25506.17 99.63 0.00 0.00 0.00 0.00 0.00 00:08:04.492 [2024-10-11T09:41:49.124Z] =================================================================================================================== 00:08:04.492 [2024-10-11T09:41:49.124Z] Total : 25506.17 99.63 0.00 0.00 0.00 0.00 0.00 00:08:04.492 00:08:05.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.874 Nvme0n1 : 7.00 25528.43 99.72 0.00 0.00 0.00 0.00 0.00 00:08:05.874 [2024-10-11T09:41:50.506Z] =================================================================================================================== 00:08:05.874 [2024-10-11T09:41:50.506Z] Total : 25528.43 99.72 0.00 0.00 0.00 0.00 0.00 00:08:05.874 00:08:06.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.815 Nvme0n1 : 8.00 25545.12 99.79 0.00 0.00 0.00 0.00 0.00 00:08:06.815 [2024-10-11T09:41:51.447Z] =================================================================================================================== 00:08:06.815 [2024-10-11T09:41:51.447Z] Total : 25545.12 99.79 0.00 0.00 0.00 0.00 0.00 00:08:06.815 00:08:07.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.756 Nvme0n1 : 9.00 25558.00 99.84 0.00 0.00 0.00 0.00 0.00 00:08:07.756 [2024-10-11T09:41:52.388Z] =================================================================================================================== 00:08:07.756 [2024-10-11T09:41:52.388Z] Total : 25558.00 99.84 0.00 0.00 0.00 0.00 0.00 00:08:07.756 00:08:08.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.697 Nvme0n1 : 10.00 25574.50 99.90 0.00 0.00 0.00 0.00 0.00 00:08:08.697 [2024-10-11T09:41:53.329Z] =================================================================================================================== 00:08:08.697 [2024-10-11T09:41:53.329Z] Total : 25574.50 99.90 0.00 0.00 0.00 0.00 0.00 00:08:08.697 00:08:08.697 00:08:08.697 Latency(us) 00:08:08.697 [2024-10-11T09:41:53.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.697 Nvme0n1 : 10.00 25574.70 99.90 0.00 0.00 5001.83 3058.35 13981.01 00:08:08.697 [2024-10-11T09:41:53.329Z] =================================================================================================================== 00:08:08.697 [2024-10-11T09:41:53.330Z] Total : 25574.70 99.90 0.00 0.00 5001.83 3058.35 13981.01 00:08:08.698 { 00:08:08.698 "results": [ 00:08:08.698 { 00:08:08.698 "job": "Nvme0n1", 00:08:08.698 "core_mask": "0x2", 00:08:08.698 "workload": "randwrite", 00:08:08.698 "status": "finished", 00:08:08.698 "queue_depth": 128, 00:08:08.698 "io_size": 4096, 00:08:08.698 
"runtime": 10.004928, 00:08:08.698 "iops": 25574.696789422174, 00:08:08.698 "mibps": 99.90115933368037, 00:08:08.698 "io_failed": 0, 00:08:08.698 "io_timeout": 0, 00:08:08.698 "avg_latency_us": 5001.834979071648, 00:08:08.698 "min_latency_us": 3058.346666666667, 00:08:08.698 "max_latency_us": 13981.013333333334 00:08:08.698 } 00:08:08.698 ], 00:08:08.698 "core_count": 1 00:08:08.698 } 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 841473 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 841473 ']' 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 841473 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 841473 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 841473' 00:08:08.698 killing process with pid 841473 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 841473 00:08:08.698 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.698 00:08:08.698 Latency(us) 00:08:08.698 [2024-10-11T09:41:53.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.698 [2024-10-11T09:41:53.330Z] =================================================================================================================== 00:08:08.698 [2024-10-11T09:41:53.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 841473 00:08:08.698 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.959 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:09.219 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:09.219 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:09.219 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:09.219 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:09.219 11:41:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 837608 00:08:09.219 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 837608 00:08:09.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 837608 Killed "${NVMF_APP[@]}" "$@" 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=843975 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 843975 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 843975 ']' 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.480 11:41:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.480 [2024-10-11 11:41:53.927297] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:09.480 [2024-10-11 11:41:53.927352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.480 [2024-10-11 11:41:54.011037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.480 [2024-10-11 11:41:54.041372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.480 [2024-10-11 11:41:54.041401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.480 [2024-10-11 11:41:54.041407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.480 [2024-10-11 11:41:54.041413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
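What follows is the crash-consistency half of the dirty test. With the lvstore reporting 61 free of 99 total clusters, the first target (pid 837608) is killed with SIGKILL rather than unloaded cleanly, a fresh nvmf_tgt is started in the cvl_0_0_ns_spdk namespace, and re-creating the AIO bdev triggers blobstore recovery (the "Performing recovery on blobstore" notices that follow). Condensed to the essential calls, reusing the $SPDK/$RPC/$AIO shorthands from the sketch above (pids and UUIDs are the ones from this run):

  kill -9 837608                                  # kill the target while the lvstore is dirty
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1   # new pid: 843975
  $RPC bdev_aio_create "$AIO" aio_bdev 4096       # blobstore replays metadata on load
  $RPC bdev_get_bdevs -b dba56590-6212-4a21-8bc3-ccdf101346ce -t 2000              # lvol is back
  $RPC bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f \
      | jq -r '.[0].free_clusters'                # expected: 61
  $RPC bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f \
      | jq -r '.[0].total_data_clusters'          # expected: 99

In other words, both the grown capacity and the cluster allocations survive the unclean shutdown, which is exactly what the (( free_clusters == 61 )) and (( data_clusters == 99 )) checks below assert.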
00:08:09.480 [2024-10-11 11:41:54.041417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.480 [2024-10-11 11:41:54.041850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.420 [2024-10-11 11:41:54.911031] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:10.420 [2024-10-11 11:41:54.911123] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:10.420 [2024-10-11 11:41:54.911145] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dba56590-6212-4a21-8bc3-ccdf101346ce 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=dba56590-6212-4a21-8bc3-ccdf101346ce 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:10.420 11:41:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:10.680 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dba56590-6212-4a21-8bc3-ccdf101346ce -t 2000 00:08:10.680 [ 00:08:10.680 { 00:08:10.680 "name": "dba56590-6212-4a21-8bc3-ccdf101346ce", 00:08:10.680 "aliases": [ 00:08:10.680 "lvs/lvol" 00:08:10.680 ], 00:08:10.680 "product_name": "Logical Volume", 00:08:10.680 "block_size": 4096, 00:08:10.680 "num_blocks": 38912, 00:08:10.680 "uuid": "dba56590-6212-4a21-8bc3-ccdf101346ce", 00:08:10.680 "assigned_rate_limits": { 00:08:10.680 "rw_ios_per_sec": 0, 00:08:10.680 "rw_mbytes_per_sec": 0, 
00:08:10.680 "r_mbytes_per_sec": 0, 00:08:10.680 "w_mbytes_per_sec": 0 00:08:10.680 }, 00:08:10.680 "claimed": false, 00:08:10.680 "zoned": false, 00:08:10.680 "supported_io_types": { 00:08:10.680 "read": true, 00:08:10.680 "write": true, 00:08:10.680 "unmap": true, 00:08:10.680 "flush": false, 00:08:10.680 "reset": true, 00:08:10.680 "nvme_admin": false, 00:08:10.680 "nvme_io": false, 00:08:10.680 "nvme_io_md": false, 00:08:10.680 "write_zeroes": true, 00:08:10.680 "zcopy": false, 00:08:10.680 "get_zone_info": false, 00:08:10.680 "zone_management": false, 00:08:10.680 "zone_append": false, 00:08:10.680 "compare": false, 00:08:10.680 "compare_and_write": false, 00:08:10.680 "abort": false, 00:08:10.680 "seek_hole": true, 00:08:10.680 "seek_data": true, 00:08:10.680 "copy": false, 00:08:10.680 "nvme_iov_md": false 00:08:10.680 }, 00:08:10.680 "driver_specific": { 00:08:10.680 "lvol": { 00:08:10.680 "lvol_store_uuid": "08f6f0b7-250d-4c18-8def-82c5688fca4f", 00:08:10.681 "base_bdev": "aio_bdev", 00:08:10.681 "thin_provision": false, 00:08:10.681 "num_allocated_clusters": 38, 00:08:10.681 "snapshot": false, 00:08:10.681 "clone": false, 00:08:10.681 "esnap_clone": false 00:08:10.681 } 00:08:10.681 } 00:08:10.681 } 00:08:10.681 ] 00:08:10.681 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:10.681 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:10.681 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:10.941 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:10.941 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:10.941 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.202 [2024-10-11 11:41:55.759721] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:11.202 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:11.461 request: 00:08:11.461 { 00:08:11.461 "uuid": "08f6f0b7-250d-4c18-8def-82c5688fca4f", 00:08:11.461 "method": "bdev_lvol_get_lvstores", 00:08:11.461 "req_id": 1 00:08:11.461 } 00:08:11.461 Got JSON-RPC error response 00:08:11.461 response: 00:08:11.461 { 00:08:11.461 "code": -19, 00:08:11.461 "message": "No such device" 00:08:11.461 } 00:08:11.461 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:11.461 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.461 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.461 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.461 11:41:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.721 aio_bdev 00:08:11.721 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dba56590-6212-4a21-8bc3-ccdf101346ce 00:08:11.721 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=dba56590-6212-4a21-8bc3-ccdf101346ce 00:08:11.721 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.721 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:11.721 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.721 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.721 11:41:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:11.721 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dba56590-6212-4a21-8bc3-ccdf101346ce -t 2000 00:08:11.981 [ 00:08:11.981 { 00:08:11.981 "name": "dba56590-6212-4a21-8bc3-ccdf101346ce", 00:08:11.981 "aliases": [ 00:08:11.981 "lvs/lvol" 00:08:11.981 ], 00:08:11.981 "product_name": "Logical Volume", 00:08:11.981 "block_size": 4096, 00:08:11.981 "num_blocks": 38912, 00:08:11.981 "uuid": "dba56590-6212-4a21-8bc3-ccdf101346ce", 00:08:11.981 "assigned_rate_limits": { 00:08:11.981 "rw_ios_per_sec": 0, 00:08:11.981 "rw_mbytes_per_sec": 0, 00:08:11.981 "r_mbytes_per_sec": 0, 00:08:11.981 "w_mbytes_per_sec": 0 00:08:11.981 }, 00:08:11.981 "claimed": false, 00:08:11.981 "zoned": false, 00:08:11.981 "supported_io_types": { 00:08:11.981 "read": true, 00:08:11.981 "write": true, 00:08:11.981 "unmap": true, 00:08:11.981 "flush": false, 00:08:11.981 "reset": true, 00:08:11.981 "nvme_admin": false, 00:08:11.981 "nvme_io": false, 00:08:11.981 "nvme_io_md": false, 00:08:11.981 "write_zeroes": true, 00:08:11.981 "zcopy": false, 00:08:11.981 "get_zone_info": false, 00:08:11.981 "zone_management": false, 00:08:11.981 "zone_append": false, 00:08:11.981 "compare": false, 00:08:11.981 "compare_and_write": false, 00:08:11.981 "abort": false, 00:08:11.981 "seek_hole": true, 00:08:11.981 "seek_data": true, 00:08:11.981 "copy": false, 00:08:11.981 "nvme_iov_md": false 00:08:11.981 }, 00:08:11.981 "driver_specific": { 00:08:11.981 "lvol": { 00:08:11.981 "lvol_store_uuid": "08f6f0b7-250d-4c18-8def-82c5688fca4f", 00:08:11.981 "base_bdev": "aio_bdev", 00:08:11.981 "thin_provision": false, 00:08:11.981 "num_allocated_clusters": 38, 00:08:11.981 "snapshot": false, 00:08:11.981 "clone": false, 00:08:11.981 "esnap_clone": false 00:08:11.981 } 00:08:11.981 } 00:08:11.981 } 00:08:11.981 ] 00:08:11.981 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:11.981 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:11.981 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:12.241 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:12.241 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:12.241 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:12.241 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:12.241 11:41:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dba56590-6212-4a21-8bc3-ccdf101346ce 00:08:12.501 11:41:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08f6f0b7-250d-4c18-8def-82c5688fca4f 00:08:12.761 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.761 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.022 00:08:13.022 real 0m17.453s 00:08:13.022 user 0m45.510s 00:08:13.022 sys 0m3.118s 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:13.022 ************************************ 00:08:13.022 END TEST lvs_grow_dirty 00:08:13.022 ************************************ 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:13.022 nvmf_trace.0 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.022 rmmod nvme_tcp 00:08:13.022 rmmod nvme_fabrics 00:08:13.022 rmmod nvme_keyring 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:13.022 
11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 843975 ']' 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 843975 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 843975 ']' 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 843975 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 843975 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 843975' 00:08:13.022 killing process with pid 843975 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 843975 00:08:13.022 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 843975 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.283 11:41:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.198 11:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.198 00:08:15.198 real 0m44.575s 00:08:15.198 user 1m7.293s 00:08:15.198 sys 0m10.708s 00:08:15.198 11:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.198 11:41:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.198 ************************************ 00:08:15.198 END TEST nvmf_lvs_grow 00:08:15.198 ************************************ 00:08:15.459 11:41:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:15.459 11:41:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:15.459 11:41:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.459 11:41:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.459 ************************************ 00:08:15.459 START TEST nvmf_bdev_io_wait 00:08:15.459 ************************************ 00:08:15.459 11:41:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:15.459 * Looking for test storage... 00:08:15.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.459 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.721 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:15.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.722 --rc genhtml_branch_coverage=1 00:08:15.722 --rc genhtml_function_coverage=1 00:08:15.722 --rc genhtml_legend=1 00:08:15.722 --rc geninfo_all_blocks=1 00:08:15.722 --rc geninfo_unexecuted_blocks=1 00:08:15.722 00:08:15.722 ' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:15.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.722 --rc genhtml_branch_coverage=1 00:08:15.722 --rc genhtml_function_coverage=1 00:08:15.722 --rc genhtml_legend=1 00:08:15.722 --rc geninfo_all_blocks=1 00:08:15.722 --rc geninfo_unexecuted_blocks=1 00:08:15.722 00:08:15.722 ' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:15.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.722 --rc genhtml_branch_coverage=1 00:08:15.722 --rc genhtml_function_coverage=1 00:08:15.722 --rc genhtml_legend=1 00:08:15.722 --rc geninfo_all_blocks=1 00:08:15.722 --rc geninfo_unexecuted_blocks=1 00:08:15.722 00:08:15.722 ' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:15.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.722 --rc genhtml_branch_coverage=1 00:08:15.722 --rc genhtml_function_coverage=1 00:08:15.722 --rc genhtml_legend=1 00:08:15.722 --rc geninfo_all_blocks=1 00:08:15.722 --rc geninfo_unexecuted_blocks=1 00:08:15.722 00:08:15.722 ' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.722 11:42:00 
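The long xtrace walk above is scripts/common.sh evaluating "lt 1.15 2", i.e. deciding that the installed lcov predates 2.x so the 1.x-compatible LCOV_OPTS get exported just below. The comparison splits each version string on '.' and '-' into an array and compares field by field, numerically. A condensed sketch of that logic (illustrative only; the real cmp_versions also serves the other comparison operators):

ver_lt() {                          # 0 (true) when $1 is an older version than $2
    local -a v1 v2
    local i
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                        # equal versions are not less-than
}
ver_lt 1.15 2 && echo 'lcov older than 2.x'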
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.722 11:42:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:23.870 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:23.870 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.870 11:42:07 
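gather_supported_nvmf_pci_devs, traced above, builds per-family PCI id lists (Intel E810/X722 and a range of Mellanox parts), scans the bus, and here matches the two Intel E810 ports at 0000:4b:00.0/.1 (vendor:device 0x8086:0x159b, bound to the ice driver); just below it resolves their kernel netdev names through sysfs. The same discovery done by hand, using only ids already present in this log:

lspci -d 8086:159b                            # the two E810 ports found above
ls /sys/bus/pci/devices/0000:4b:00.0/net/     # -> cvl_0_0 (renamed netdev)
ls /sys/bus/pci/devices/0000:4b:00.1/net/     # -> cvl_0_1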
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:23.870 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:23.870 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.870 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:08:23.871 00:08:23.871 --- 10.0.0.2 ping statistics --- 00:08:23.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.871 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
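nvmf_tcp_init, traced above, emulates a two-host NVMe/TCP setup on one machine: the first E810 port becomes the target NIC inside a private network namespace, the second stays in the root namespace as the initiator, and an iptables rule admits TCP/4420; the cross-namespace pings (the second one continuing just below) prove the data path. Condensed from the commands above, same names and addresses (sketch):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target ns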
00:08:23.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:08:23.871 00:08:23.871 --- 10.0.0.1 ping statistics --- 00:08:23.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.871 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=849134 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 849134 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 849134 ']' 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.871 11:42:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.871 [2024-10-11 11:42:07.726555] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
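nvmfappstart, beginning here, runs nvmf_tgt inside the target namespace with --wait-for-rpc, so the app halts after EAL/DPDK bring-up until framework_start_init arrives over RPC (issued a few entries below), while waitforlisten polls the RPC socket until the app answers. The launch, reduced from the trace (a sketch; the polling loop is an assumption about waitforlisten's internals):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# assumed detail: retry a cheap RPC until /var/tmp/spdk.sock accepts it
until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done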
00:08:23.871 [2024-10-11 11:42:07.726622] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.871 [2024-10-11 11:42:07.814514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.871 [2024-10-11 11:42:07.871314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.871 [2024-10-11 11:42:07.871368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.871 [2024-10-11 11:42:07.871377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.871 [2024-10-11 11:42:07.871385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.871 [2024-10-11 11:42:07.871392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.871 [2024-10-11 11:42:07.873464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.871 [2024-10-11 11:42:07.873623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.871 [2024-10-11 11:42:07.873783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.871 [2024-10-11 11:42:07.873783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:24.133 [2024-10-11 11:42:08.669351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.133 Malloc0 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:24.133 [2024-10-11 11:42:08.734702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=849197 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:24.133 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=849201 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:24.134 { 00:08:24.134 "params": { 
00:08:24.134 "name": "Nvme$subsystem", 00:08:24.134 "trtype": "$TEST_TRANSPORT", 00:08:24.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.134 "adrfam": "ipv4", 00:08:24.134 "trsvcid": "$NVMF_PORT", 00:08:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.134 "hdgst": ${hdgst:-false}, 00:08:24.134 "ddgst": ${ddgst:-false} 00:08:24.134 }, 00:08:24.134 "method": "bdev_nvme_attach_controller" 00:08:24.134 } 00:08:24.134 EOF 00:08:24.134 )") 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=849203 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:24.134 { 00:08:24.134 "params": { 00:08:24.134 "name": "Nvme$subsystem", 00:08:24.134 "trtype": "$TEST_TRANSPORT", 00:08:24.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.134 "adrfam": "ipv4", 00:08:24.134 "trsvcid": "$NVMF_PORT", 00:08:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.134 "hdgst": ${hdgst:-false}, 00:08:24.134 "ddgst": ${ddgst:-false} 00:08:24.134 }, 00:08:24.134 "method": "bdev_nvme_attach_controller" 00:08:24.134 } 00:08:24.134 EOF 00:08:24.134 )") 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=849206 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:24.134 { 00:08:24.134 "params": { 00:08:24.134 "name": "Nvme$subsystem", 00:08:24.134 "trtype": "$TEST_TRANSPORT", 00:08:24.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.134 "adrfam": "ipv4", 00:08:24.134 "trsvcid": "$NVMF_PORT", 00:08:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.134 "hdgst": ${hdgst:-false}, 
00:08:24.134 "ddgst": ${ddgst:-false} 00:08:24.134 }, 00:08:24.134 "method": "bdev_nvme_attach_controller" 00:08:24.134 } 00:08:24.134 EOF 00:08:24.134 )") 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:24.134 { 00:08:24.134 "params": { 00:08:24.134 "name": "Nvme$subsystem", 00:08:24.134 "trtype": "$TEST_TRANSPORT", 00:08:24.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.134 "adrfam": "ipv4", 00:08:24.134 "trsvcid": "$NVMF_PORT", 00:08:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.134 "hdgst": ${hdgst:-false}, 00:08:24.134 "ddgst": ${ddgst:-false} 00:08:24.134 }, 00:08:24.134 "method": "bdev_nvme_attach_controller" 00:08:24.134 } 00:08:24.134 EOF 00:08:24.134 )") 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 849197 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:24.134 "params": { 00:08:24.134 "name": "Nvme1", 00:08:24.134 "trtype": "tcp", 00:08:24.134 "traddr": "10.0.0.2", 00:08:24.134 "adrfam": "ipv4", 00:08:24.134 "trsvcid": "4420", 00:08:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.134 "hdgst": false, 00:08:24.134 "ddgst": false 00:08:24.134 }, 00:08:24.134 "method": "bdev_nvme_attach_controller" 00:08:24.134 }' 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:24.134 "params": { 00:08:24.134 "name": "Nvme1", 00:08:24.134 "trtype": "tcp", 00:08:24.134 "traddr": "10.0.0.2", 00:08:24.134 "adrfam": "ipv4", 00:08:24.134 "trsvcid": "4420", 00:08:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.134 "hdgst": false, 00:08:24.134 "ddgst": false 00:08:24.134 }, 00:08:24.134 "method": "bdev_nvme_attach_controller" 00:08:24.134 }' 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:24.134 "params": { 00:08:24.134 "name": "Nvme1", 00:08:24.134 "trtype": "tcp", 00:08:24.134 "traddr": "10.0.0.2", 00:08:24.134 "adrfam": "ipv4", 00:08:24.134 "trsvcid": "4420", 00:08:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.134 "hdgst": false, 00:08:24.134 "ddgst": false 00:08:24.134 }, 00:08:24.134 "method": "bdev_nvme_attach_controller" 00:08:24.134 }' 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:24.134 11:42:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:24.134 "params": { 00:08:24.134 "name": "Nvme1", 00:08:24.134 "trtype": "tcp", 00:08:24.134 "traddr": "10.0.0.2", 00:08:24.134 "adrfam": "ipv4", 00:08:24.134 "trsvcid": "4420", 00:08:24.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.134 "hdgst": false, 00:08:24.134 "ddgst": false 00:08:24.134 }, 00:08:24.134 "method": "bdev_nvme_attach_controller" 00:08:24.134 }' 00:08:24.396 [2024-10-11 11:42:08.790877] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:24.396 [2024-10-11 11:42:08.790950] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:24.396 [2024-10-11 11:42:08.796243] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:24.396 [2024-10-11 11:42:08.796306] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:24.396 [2024-10-11 11:42:08.797464] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:08:24.396 [2024-10-11 11:42:08.797527] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:24.396 [2024-10-11 11:42:08.800010] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
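Each of the four bdevperf instances starting here receives one of those JSON configs as --json /dev/fd/63, which is bash process substitution of gen_nvmf_target_json; the instances differ only in core mask, instance id, and workload (write/read/flush/unmap). The write instance, reconstructed from its invocation above (sketch, run from the spdk tree):

./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(gen_nvmf_target_json)      # <(...) appears to the app as /dev/fd/63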
00:08:24.396 [2024-10-11 11:42:08.800071] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:24.396 [2024-10-11 11:42:08.993071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.657 [2024-10-11 11:42:09.032507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:24.657 [2024-10-11 11:42:09.086069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.657 [2024-10-11 11:42:09.127011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:24.657 [2024-10-11 11:42:09.180452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.657 [2024-10-11 11:42:09.222409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:24.657 [2024-10-11 11:42:09.233535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.657 [2024-10-11 11:42:09.272702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:24.918 Running I/O for 1 seconds... 00:08:24.918 Running I/O for 1 seconds... 00:08:24.918 Running I/O for 1 seconds... 00:08:25.179 Running I/O for 1 seconds... 00:08:25.750 187112.00 IOPS, 730.91 MiB/s 00:08:25.750 Latency(us) 00:08:25.750 [2024-10-11T09:42:10.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.750 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:25.750 Nvme1n1 : 1.00 186739.24 729.45 0.00 0.00 681.70 303.79 1979.73 00:08:25.750 [2024-10-11T09:42:10.382Z] =================================================================================================================== 00:08:25.750 [2024-10-11T09:42:10.382Z] Total : 186739.24 729.45 0.00 0.00 681.70 303.79 1979.73 00:08:26.011 7762.00 IOPS, 30.32 MiB/s 00:08:26.011 Latency(us) 00:08:26.011 [2024-10-11T09:42:10.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.011 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:26.011 Nvme1n1 : 1.02 7759.62 30.31 0.00 0.00 16364.70 5406.72 25995.95 00:08:26.011 [2024-10-11T09:42:10.643Z] =================================================================================================================== 00:08:26.011 [2024-10-11T09:42:10.643Z] Total : 7759.62 30.31 0.00 0.00 16364.70 5406.72 25995.95 00:08:26.011 10723.00 IOPS, 41.89 MiB/s 00:08:26.011 Latency(us) 00:08:26.011 [2024-10-11T09:42:10.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.011 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:26.011 Nvme1n1 : 1.01 10782.76 42.12 0.00 0.00 11824.11 6062.08 22500.69 00:08:26.011 [2024-10-11T09:42:10.643Z] =================================================================================================================== 00:08:26.012 [2024-10-11T09:42:10.644Z] Total : 10782.76 42.12 0.00 0.00 11824.11 6062.08 22500.69 00:08:26.012 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 849201 00:08:26.012 7618.00 IOPS, 29.76 MiB/s 00:08:26.012 Latency(us) 00:08:26.012 [2024-10-11T09:42:10.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.012 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:26.012 Nvme1n1 : 1.01 7709.85 30.12 0.00 0.00 16553.23 3932.16 36918.61 00:08:26.012 
[2024-10-11T09:42:10.644Z] =================================================================================================================== 00:08:26.012 [2024-10-11T09:42:10.644Z] Total : 7709.85 30.12 0.00 0.00 16553.23 3932.16 36918.61 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 849203 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 849206 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.273 rmmod nvme_tcp 00:08:26.273 rmmod nvme_fabrics 00:08:26.273 rmmod nvme_keyring 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 849134 ']' 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 849134 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 849134 ']' 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 849134 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 849134 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 849134' 00:08:26.273 killing process with pid 849134 
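killprocess, finishing here and on the next lines, guards before signalling: it reads the process's comm (reactor_0 for an SPDK target), refuses to proceed if the pid actually belongs to sudo, then kills pid 849134 and waits on it so the iptables/namespace teardown below cannot race a dying target. Its essentials (a sketch of the traced guards, not the function verbatim):

pid=849134
name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
[ "$name" = sudo ] && exit 1              # never signal the sudo wrapper itself
echo "killing process with pid $pid"
kill "$pid"
wait "$pid"                               # valid: the target is this shell's child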
00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 849134 00:08:26.273 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 849134 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.535 11:42:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.446 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:28.446 00:08:28.446 real 0m13.129s 00:08:28.446 user 0m19.621s 00:08:28.446 sys 0m7.511s 00:08:28.446 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.446 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.446 ************************************ 00:08:28.446 END TEST nvmf_bdev_io_wait 00:08:28.446 ************************************ 00:08:28.446 11:42:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.446 11:42:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.446 11:42:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.446 11:42:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.707 ************************************ 00:08:28.707 START TEST nvmf_queue_depth 00:08:28.707 ************************************ 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:28.707 * Looking for test storage... 
00:08:28.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.707 --rc genhtml_branch_coverage=1 00:08:28.707 --rc genhtml_function_coverage=1 00:08:28.707 --rc genhtml_legend=1 00:08:28.707 --rc geninfo_all_blocks=1 00:08:28.707 --rc geninfo_unexecuted_blocks=1 00:08:28.707 00:08:28.707 ' 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.707 --rc genhtml_branch_coverage=1 00:08:28.707 --rc genhtml_function_coverage=1 00:08:28.707 --rc genhtml_legend=1 00:08:28.707 --rc geninfo_all_blocks=1 00:08:28.707 --rc geninfo_unexecuted_blocks=1 00:08:28.707 00:08:28.707 ' 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.707 --rc genhtml_branch_coverage=1 00:08:28.707 --rc genhtml_function_coverage=1 00:08:28.707 --rc genhtml_legend=1 00:08:28.707 --rc geninfo_all_blocks=1 00:08:28.707 --rc geninfo_unexecuted_blocks=1 00:08:28.707 00:08:28.707 ' 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.707 --rc genhtml_branch_coverage=1 00:08:28.707 --rc genhtml_function_coverage=1 00:08:28.707 --rc genhtml_legend=1 00:08:28.707 --rc geninfo_all_blocks=1 00:08:28.707 --rc geninfo_unexecuted_blocks=1 00:08:28.707 00:08:28.707 ' 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.707 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.708 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.708 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.708 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.708 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.969 11:42:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:37.107 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:37.107 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:37.107 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:37.107 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:37.107 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:37.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:08:37.108 00:08:37.108 --- 10.0.0.2 ping statistics --- 00:08:37.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.108 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:08:37.108 00:08:37.108 --- 10.0.0.1 ping statistics --- 00:08:37.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.108 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=854362 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 854362 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 854362 ']' 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.108 11:42:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 [2024-10-11 11:42:20.804382] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
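The ping statistics above verify the loopback topology that nvmf_tcp_init has just wired up: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1 in the root namespace, and an iptables ACCEPT rule opens TCP port 4420 on the initiator interface. Gathered in one place, the commands traced above amount to (device names and addresses exactly as the log prints them):

    # Target port lives in its own namespace; the initiator stays in the
    # root namespace, so the two sides talk over a real NIC-to-NIC path.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

nvmf_tgt is then started inside the namespace via ip netns exec (with -i 0 -e 0xFFFF -m 0x2), which is the EAL initialization visible in the surrounding lines.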
00:08:37.108 [2024-10-11 11:42:20.804444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.108 [2024-10-11 11:42:20.893146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.108 [2024-10-11 11:42:20.944291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.108 [2024-10-11 11:42:20.944338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.108 [2024-10-11 11:42:20.944346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.108 [2024-10-11 11:42:20.944353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.108 [2024-10-11 11:42:20.944359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.108 [2024-10-11 11:42:20.945110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 [2024-10-11 11:42:21.672404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 Malloc0 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.108 11:42:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.108 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.108 [2024-10-11 11:42:21.733479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=854708 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 854708 /var/tmp/bdevperf.sock 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 854708 ']' 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:37.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.370 11:42:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.370 [2024-10-11 11:42:21.801648] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
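At this point the target side is fully provisioned and the bdevperf initiator is starting up. The RPC sequence traced above, gathered into one block (every value — the transport options, the 64 MiB / 512-byte malloc bdev, the NQN, serial, listen address and port — is the one the test actually issued through rpc_cmd):

    # Provision a TCP NVMe-oF target backed by a 64 MiB malloc bdev
    # and expose it on 10.0.0.2:4420 under cnode1.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf itself is launched with -z (start suspended and wait for RPC) on its own socket, the controller is attached with bdev_nvme_attach_controller over /var/tmp/bdevperf.sock, and perform_tests then drives the 10-second verify workload at queue depth 1024 whose per-second IOPS and final results follow.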
00:08:37.370 [2024-10-11 11:42:21.801724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854708 ] 00:08:37.370 [2024-10-11 11:42:21.884672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.370 [2024-10-11 11:42:21.937721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.312 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:38.312 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:38.312 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:38.312 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.312 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.312 NVMe0n1 00:08:38.312 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.312 11:42:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:38.572 Running I/O for 10 seconds... 00:08:40.456 8534.00 IOPS, 33.34 MiB/s [2024-10-11T09:42:26.028Z] 8973.50 IOPS, 35.05 MiB/s [2024-10-11T09:42:26.969Z] 9888.00 IOPS, 38.62 MiB/s [2024-10-11T09:42:28.355Z] 10477.75 IOPS, 40.93 MiB/s [2024-10-11T09:42:29.296Z] 11055.20 IOPS, 43.18 MiB/s [2024-10-11T09:42:30.237Z] 11438.50 IOPS, 44.68 MiB/s [2024-10-11T09:42:31.178Z] 11821.71 IOPS, 46.18 MiB/s [2024-10-11T09:42:32.118Z] 12065.50 IOPS, 47.13 MiB/s [2024-10-11T09:42:33.059Z] 12281.56 IOPS, 47.97 MiB/s [2024-10-11T09:42:33.059Z] 12389.20 IOPS, 48.40 MiB/s 00:08:48.427 Latency(us) 00:08:48.427 [2024-10-11T09:42:33.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.427 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:48.427 Verification LBA range: start 0x0 length 0x4000 00:08:48.427 NVMe0n1 : 10.04 12435.24 48.58 0.00 0.00 82081.31 9939.63 76895.57 00:08:48.427 [2024-10-11T09:42:33.059Z] =================================================================================================================== 00:08:48.427 [2024-10-11T09:42:33.059Z] Total : 12435.24 48.58 0.00 0.00 82081.31 9939.63 76895.57 00:08:48.427 { 00:08:48.427 "results": [ 00:08:48.427 { 00:08:48.427 "job": "NVMe0n1", 00:08:48.427 "core_mask": "0x1", 00:08:48.427 "workload": "verify", 00:08:48.427 "status": "finished", 00:08:48.427 "verify_range": { 00:08:48.427 "start": 0, 00:08:48.427 "length": 16384 00:08:48.427 }, 00:08:48.427 "queue_depth": 1024, 00:08:48.427 "io_size": 4096, 00:08:48.427 "runtime": 10.044756, 00:08:48.427 "iops": 12435.244818291256, 00:08:48.427 "mibps": 48.57517507145022, 00:08:48.427 "io_failed": 0, 00:08:48.427 "io_timeout": 0, 00:08:48.427 "avg_latency_us": 82081.30759299437, 00:08:48.427 "min_latency_us": 9939.626666666667, 00:08:48.427 "max_latency_us": 76895.57333333333 00:08:48.427 } 00:08:48.427 ], 00:08:48.427 "core_count": 1 00:08:48.427 } 00:08:48.427 11:42:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 854708 00:08:48.427 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 854708 ']' 00:08:48.427 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 854708 00:08:48.427 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:48.427 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.427 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854708 00:08:48.688 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.688 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.688 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854708' 00:08:48.688 killing process with pid 854708 00:08:48.688 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 854708 00:08:48.688 Received shutdown signal, test time was about 10.000000 seconds 00:08:48.688 00:08:48.688 Latency(us) 00:08:48.688 [2024-10-11T09:42:33.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.688 [2024-10-11T09:42:33.321Z] =================================================================================================================== 00:08:48.689 [2024-10-11T09:42:33.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 854708 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.689 rmmod nvme_tcp 00:08:48.689 rmmod nvme_fabrics 00:08:48.689 rmmod nvme_keyring 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 854362 ']' 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 854362 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 854362 ']' 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 854362 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.689 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854362 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854362' 00:08:48.949 killing process with pid 854362 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 854362 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 854362 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.949 11:42:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.495 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.495 00:08:51.495 real 0m22.423s 00:08:51.495 user 0m25.776s 00:08:51.495 sys 0m7.049s 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:51.496 ************************************ 00:08:51.496 END TEST nvmf_queue_depth 00:08:51.496 ************************************ 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:51.496 ************************************ 00:08:51.496 START TEST nvmf_target_multipath 00:08:51.496 ************************************ 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:51.496 * Looking for test storage... 00:08:51.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.496 --rc genhtml_branch_coverage=1 00:08:51.496 --rc genhtml_function_coverage=1 00:08:51.496 --rc genhtml_legend=1 00:08:51.496 --rc geninfo_all_blocks=1 00:08:51.496 --rc geninfo_unexecuted_blocks=1 00:08:51.496 00:08:51.496 ' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.496 --rc genhtml_branch_coverage=1 00:08:51.496 --rc genhtml_function_coverage=1 00:08:51.496 --rc genhtml_legend=1 00:08:51.496 --rc geninfo_all_blocks=1 00:08:51.496 --rc geninfo_unexecuted_blocks=1 00:08:51.496 00:08:51.496 ' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.496 --rc genhtml_branch_coverage=1 00:08:51.496 --rc genhtml_function_coverage=1 00:08:51.496 --rc genhtml_legend=1 00:08:51.496 --rc geninfo_all_blocks=1 00:08:51.496 --rc geninfo_unexecuted_blocks=1 00:08:51.496 00:08:51.496 ' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.496 --rc genhtml_branch_coverage=1 00:08:51.496 --rc genhtml_function_coverage=1 00:08:51.496 --rc genhtml_legend=1 00:08:51.496 --rc geninfo_all_blocks=1 00:08:51.496 --rc geninfo_unexecuted_blocks=1 00:08:51.496 00:08:51.496 ' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.496 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.497 11:42:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:59.639 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.639 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:59.640 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:59.640 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.640 11:42:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:59.640 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.640 11:42:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:08:59.640 00:08:59.640 --- 10.0.0.2 ping statistics --- 00:08:59.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.640 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:08:59.640 00:08:59.640 --- 10.0.0.1 ping statistics --- 00:08:59.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.640 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:59.640 only one NIC for nvmf test 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
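The nvmf_tcp_init sequence traced above (and re-run when the zcopy test initializes below) reduces to a short iproute2 script: one port of the dual-port E810 NIC becomes the target inside a network namespace, the other stays in the root namespace as the initiator. A minimal sketch, assuming root privileges and the interface/namespace names from the trace:

  TARGET_IF=cvl_0_0        # port 0000:4b:00.0, moved into the namespace (target side)
  INITIATOR_IF=cvl_0_1     # port 0000:4b:00.1, left in the root namespace (initiator side)
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port; the comment tag is what lets iptr find the rule at cleanup
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                      # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

Both pings answering is what lets the init return 0 above; multipath then bails out ("only one NIC for nvmf test") because it needs a second NIC pair, so the teardown that follows starts immediately.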
00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.640 rmmod nvme_tcp 00:08:59.640 rmmod nvme_fabrics 00:08:59.640 rmmod nvme_keyring 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.640 11:42:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:01.026 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.027 00:09:01.027 real 0m9.899s 00:09:01.027 user 0m2.091s 00:09:01.027 sys 0m5.758s 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:01.027 ************************************ 00:09:01.027 END TEST nvmf_target_multipath 00:09:01.027 ************************************ 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.027 ************************************ 00:09:01.027 START TEST nvmf_zcopy 00:09:01.027 ************************************ 00:09:01.027 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:01.289 * Looking for test storage... 
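The teardown just traced runs twice, once from multipath's exit 0 and once more from the nvmftestfini EXIT trap, and both passes are idempotent by design: module unloads are retried with errors tolerated, and only firewall rules carrying the SPDK_NVMF tag are stripped. A rough sketch of the pattern (the loop exit condition and the namespace removal are inferred; the _remove_spdk_ns body is hidden behind the fd-15 redirect in the trace):

  set +e
  for i in {1..20}; do                     # modules can be busy right after a test
      modprobe -v -r nvme-tcp
      modprobe -v -r nvme-fabrics && break # exit condition assumed; one pass sufficed here
  done
  set -e
  # iptr: rewrite the ruleset minus everything this test tagged
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk          # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1

The save/filter/restore trick is why the init side tagged its ACCEPT rule with a comment: cleanup never has to remember which rules it added.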
00:09:01.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.289 --rc genhtml_branch_coverage=1 00:09:01.289 --rc genhtml_function_coverage=1 00:09:01.289 --rc genhtml_legend=1 00:09:01.289 --rc geninfo_all_blocks=1 00:09:01.289 --rc geninfo_unexecuted_blocks=1 00:09:01.289 00:09:01.289 ' 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.289 --rc genhtml_branch_coverage=1 00:09:01.289 --rc genhtml_function_coverage=1 00:09:01.289 --rc genhtml_legend=1 00:09:01.289 --rc geninfo_all_blocks=1 00:09:01.289 --rc geninfo_unexecuted_blocks=1 00:09:01.289 00:09:01.289 ' 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:01.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.289 --rc genhtml_branch_coverage=1 00:09:01.289 --rc genhtml_function_coverage=1 00:09:01.289 --rc genhtml_legend=1 00:09:01.289 --rc geninfo_all_blocks=1 00:09:01.289 --rc geninfo_unexecuted_blocks=1 00:09:01.289 00:09:01.289 ' 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.289 --rc genhtml_branch_coverage=1 00:09:01.289 --rc genhtml_function_coverage=1 00:09:01.289 --rc genhtml_legend=1 00:09:01.289 --rc geninfo_all_blocks=1 00:09:01.289 --rc geninfo_unexecuted_blocks=1 00:09:01.289 00:09:01.289 ' 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.289 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.290 11:42:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:09.425 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:09.425 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:09.425 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:09.425 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:09.425 11:42:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:09.425 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:09.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:09:09.425 00:09:09.425 --- 10.0.0.2 ping statistics --- 00:09:09.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.426 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:09:09.426 00:09:09.426 --- 10.0.0.1 ping statistics --- 00:09:09.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.426 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=865405 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 865405 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 865405 ']' 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.426 11:42:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.426 [2024-10-11 11:42:53.394042] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
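Note how the zcopy harness launches the target inside the namespace so its TCP listener binds there; the banner just above is nvmf_tgt coming up. A sketch of the launch step, with the Jenkins paths shortened and the backgrounding/pid capture inferred from nvmfpid=865405:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # test helper: polls until /var/tmp/spdk.sock accepts RPCs

Here -m 0x2 pins the app to core 1 (matching "Reactor started on core 1" below), -i 0 sets the shared-memory id, and -e 0xFFFF enables the full tracepoint mask reported in the notices that follow.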
00:09:09.426 [2024-10-11 11:42:53.394110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.426 [2024-10-11 11:42:53.481385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.426 [2024-10-11 11:42:53.531278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.426 [2024-10-11 11:42:53.531331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.426 [2024-10-11 11:42:53.531340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.426 [2024-10-11 11:42:53.531347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.426 [2024-10-11 11:42:53.531353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.426 [2024-10-11 11:42:53.532129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.687 [2024-10-11 11:42:54.269566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.687 [2024-10-11 11:42:54.293893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.687 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.948 malloc0 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:09.948 { 00:09:09.948 "params": { 00:09:09.948 "name": "Nvme$subsystem", 00:09:09.948 "trtype": "$TEST_TRANSPORT", 00:09:09.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.948 "adrfam": "ipv4", 00:09:09.948 "trsvcid": "$NVMF_PORT", 00:09:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.948 "hdgst": ${hdgst:-false}, 00:09:09.948 "ddgst": ${ddgst:-false} 00:09:09.948 }, 00:09:09.948 "method": "bdev_nvme_attach_controller" 00:09:09.948 } 00:09:09.948 EOF 00:09:09.948 )") 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
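The rpc_cmd calls traced above (target/zcopy.sh@22 through @30) are the entire target-side setup: a zero-copy TCP transport, one subsystem with a data listener and a discovery listener, and a malloc bdev exported as namespace 1. The same state can be reproduced against a running nvmf_tgt with rpc.py directly; a sketch assuming the default /var/tmp/spdk.sock socket (rpc_cmd in the trace is just a wrapper that supplies the socket argument):

#!/usr/bin/env bash
set -e
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport; -o, -c 0 and --zcopy are copied verbatim from zcopy.sh@22.
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem allowing any host (-a), with the serial number and the
# 10-namespace cap (-m 10) used above, plus its data and discovery listeners.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with a 4096-byte block size, exported as NSID 1.
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1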
00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:09.948 11:42:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:09.948 "params": { 00:09:09.948 "name": "Nvme1", 00:09:09.948 "trtype": "tcp", 00:09:09.948 "traddr": "10.0.0.2", 00:09:09.948 "adrfam": "ipv4", 00:09:09.948 "trsvcid": "4420", 00:09:09.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.948 "hdgst": false, 00:09:09.948 "ddgst": false 00:09:09.948 }, 00:09:09.948 "method": "bdev_nvme_attach_controller" 00:09:09.948 }' 00:09:09.948 [2024-10-11 11:42:54.394772] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:09:09.948 [2024-10-11 11:42:54.394838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid865602 ] 00:09:09.948 [2024-10-11 11:42:54.466320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.948 [2024-10-11 11:42:54.519661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.209 Running I/O for 10 seconds... 00:09:12.532 7495.00 IOPS, 58.55 MiB/s [2024-10-11T09:42:58.104Z] 8614.00 IOPS, 67.30 MiB/s [2024-10-11T09:42:59.043Z] 8998.67 IOPS, 70.30 MiB/s [2024-10-11T09:42:59.983Z] 9188.75 IOPS, 71.79 MiB/s [2024-10-11T09:43:00.923Z] 9307.80 IOPS, 72.72 MiB/s [2024-10-11T09:43:01.864Z] 9370.83 IOPS, 73.21 MiB/s [2024-10-11T09:43:03.246Z] 9423.57 IOPS, 73.62 MiB/s [2024-10-11T09:43:04.185Z] 9465.25 IOPS, 73.95 MiB/s [2024-10-11T09:43:05.126Z] 9496.00 IOPS, 74.19 MiB/s [2024-10-11T09:43:05.126Z] 9522.00 IOPS, 74.39 MiB/s 00:09:20.494 Latency(us) 00:09:20.494 [2024-10-11T09:43:05.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.494 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:20.494 Verification LBA range: start 0x0 length 0x1000 00:09:20.494 Nvme1n1 : 10.01 9522.47 74.39 0.00 0.00 13393.88 1645.23 27634.35 00:09:20.494 [2024-10-11T09:43:05.127Z] =================================================================================================================== 00:09:20.495 [2024-10-11T09:43:05.127Z] Total : 9522.47 74.39 0.00 0.00 13393.88 1645.23 27634.35 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=867772 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:20.495 { 00:09:20.495 "params": { 00:09:20.495 "name": 
"Nvme$subsystem", 00:09:20.495 "trtype": "$TEST_TRANSPORT", 00:09:20.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:20.495 "adrfam": "ipv4", 00:09:20.495 "trsvcid": "$NVMF_PORT", 00:09:20.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:20.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:20.495 "hdgst": ${hdgst:-false}, 00:09:20.495 "ddgst": ${ddgst:-false} 00:09:20.495 }, 00:09:20.495 "method": "bdev_nvme_attach_controller" 00:09:20.495 } 00:09:20.495 EOF 00:09:20.495 )") 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:20.495 [2024-10-11 11:43:04.955631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:04.955663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:20.495 11:43:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:20.495 "params": { 00:09:20.495 "name": "Nvme1", 00:09:20.495 "trtype": "tcp", 00:09:20.495 "traddr": "10.0.0.2", 00:09:20.495 "adrfam": "ipv4", 00:09:20.495 "trsvcid": "4420", 00:09:20.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:20.495 "hdgst": false, 00:09:20.495 "ddgst": false 00:09:20.495 }, 00:09:20.495 "method": "bdev_nvme_attach_controller" 00:09:20.495 }' 00:09:20.495 [2024-10-11 11:43:04.967621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:04.967630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:04.979648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:04.979655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:04.991680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:04.991687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.003712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.003720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.010257] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:09:20.495 [2024-10-11 11:43:05.010305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867772 ] 00:09:20.495 [2024-10-11 11:43:05.015739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.015746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.027769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.027776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.039802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.039809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.051832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.051839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.063861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.063868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.075891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.075898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.084927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.495 [2024-10-11 11:43:05.087922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.087929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.099954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.099964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.111986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.111995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.495 [2024-10-11 11:43:05.114186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.495 [2024-10-11 11:43:05.124036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.495 [2024-10-11 11:43:05.124049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.136057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.136070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.148087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.148097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.160114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:20.756 [2024-10-11 11:43:05.160122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.172145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.172151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.184175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.184182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.196222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.196238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.208243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.208252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.220270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.220279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.232301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.232307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.244329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.244336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.256362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.256370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.268394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.268404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.280424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.280433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 [2024-10-11 11:43:05.292460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.756 [2024-10-11 11:43:05.292474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.756 Running I/O for 5 seconds... 
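Every error pair in this stretch has the same cause: while the 5-second randrw job runs, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which the target rejects because the namespace already exists; the nvmf_rpc_ns_paused frame in each message indicates the RPC pauses the subsystem before the attempt and resumes it after, so each rejected call still exercises pause/resume under live zero-copy I/O. A sketch of such a loop, assuming rpc.py and the default socket (the real harness drives this through rpc_cmd on its own schedule):

#!/usr/bin/env bash
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
# Re-add the already-attached namespace in a loop while bdevperf runs.
# Every call fails with "Requested NSID 1 already in use", as logged above,
# but still forces a subsystem pause/resume cycle under live I/O.
end=$((SECONDS + 5))
while ((SECONDS < end)); do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done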
[The same two-entry pair, subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, then nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, repeats with fresh timestamps from 11:43:05.304 through 11:43:07.956 while the 5-second job runs; the only other entries in this stretch are the bdevperf throughput samples 19102.00 IOPS, 149.23 MiB/s [2024-10-11T09:43:06.431Z] and 19213.50 IOPS, 150.11 MiB/s [2024-10-11T09:43:07.473Z].]
[2024-10-11 11:43:07.970416] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.364 [2024-10-11 11:43:07.970432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.364 [2024-10-11 11:43:07.983506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.364 [2024-10-11 11:43:07.983522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:07.996354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:07.996369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.009769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.009784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.022445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.022460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.036039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.036054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.049678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.049693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.062928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.062947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.075561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.075576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.088398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.088413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.101754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.101769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.115038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.115053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.128182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.128198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.141830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.141845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.154339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.154354] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.167488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.167503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.180340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.180355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.192929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.192943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.205421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.205436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.218759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.218774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.232043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.232058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.625 [2024-10-11 11:43:08.245564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.625 [2024-10-11 11:43:08.245579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.258821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.258836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.272507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.272522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.285228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.285243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.298704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.298720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 19247.67 IOPS, 150.37 MiB/s [2024-10-11T09:43:08.518Z] [2024-10-11 11:43:08.311394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.311413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.324372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.324387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.337707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.337722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 
11:43:08.350910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.350924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.364387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.364401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.376719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.376733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.389580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.389595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.402288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.402303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.415564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.415578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.428547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.428561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.441192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.441206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.453924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.453938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.467148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.467162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.480728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.480743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.493375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.493389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.886 [2024-10-11 11:43:08.505628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.886 [2024-10-11 11:43:08.505643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.518306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.518321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.531031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.531046] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.543419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.543434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.556006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.556021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.568900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.568915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.581714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.581729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.594923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.594937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.608084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.608099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.621088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.621102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.633994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.634009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.647418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.647432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.660862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.660876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.673886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.673900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.686980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.686994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.699484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.699499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.713016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.713031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.726085] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.726099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.739559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.739574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.752895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.752909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.147 [2024-10-11 11:43:08.765928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.147 [2024-10-11 11:43:08.765943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.407 [2024-10-11 11:43:08.779233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.407 [2024-10-11 11:43:08.779248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.407 [2024-10-11 11:43:08.792698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.407 [2024-10-11 11:43:08.792712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.407 [2024-10-11 11:43:08.805160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.407 [2024-10-11 11:43:08.805174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.407 [2024-10-11 11:43:08.818170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.407 [2024-10-11 11:43:08.818185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.407 [2024-10-11 11:43:08.831503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.407 [2024-10-11 11:43:08.831517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.407 [2024-10-11 11:43:08.845239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.407 [2024-10-11 11:43:08.845253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.407 [2024-10-11 11:43:08.858131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.407 [2024-10-11 11:43:08.858146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.871531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.871545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.884943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.884957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.898049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.898063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.911350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.911365] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.924912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.924927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.938528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.938544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.952114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.952128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.964953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.964968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.977921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.977936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:08.991010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:08.991025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:09.004631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:09.004646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:09.017916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:09.017930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.408 [2024-10-11 11:43:09.031438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.408 [2024-10-11 11:43:09.031452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.044066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.044081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.057096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.057110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.070084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.070098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.083088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.083102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.096615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.096630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.109937] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.109952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.123388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.123403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.136076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.136092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.149427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.149442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.161939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.161954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.174784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.174798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.188294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.188308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.201581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.201597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.214911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.214925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.228145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.228159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.240791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.240806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.254478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.254493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.266748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.266762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.280161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.280176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.669 [2024-10-11 11:43:09.293211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.669 [2024-10-11 11:43:09.293226] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.306851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.306867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 19241.25 IOPS, 150.32 MiB/s [2024-10-11T09:43:09.562Z] [2024-10-11 11:43:09.319405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.319420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.332645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.332661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.345569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.345584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.358902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.358917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.372475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.372490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.386196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.386211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.398705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.398719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.411004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.411018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.424613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.424628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.437574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.437589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.450888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.450903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.464492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.464507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.477602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.477617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 
11:43:09.491140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.491155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.503675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.503689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.516374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.516389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.529871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.529886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.542721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.542740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.930 [2024-10-11 11:43:09.555697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.930 [2024-10-11 11:43:09.555712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.569249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.569265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.583052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.583067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.595883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.595899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.609365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.609381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.622739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.622754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.635239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.635253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.647783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.647797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.661368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.661383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.674503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.674518] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.688053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.688068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.701536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.701551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.714544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.714559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.727565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.727579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.740388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.740402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.753970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.753985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.766530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.766545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.779739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.779754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.793349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.793368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.805773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.805788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.191 [2024-10-11 11:43:09.818614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.191 [2024-10-11 11:43:09.818629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.831992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.832007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.845653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.845673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.858145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.858160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.871027] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.871042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.884287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.884302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.897474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.897489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.910807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.910822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.923994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.924009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.937096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.937111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.950214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.950229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.963736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.963750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.977074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.977089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:09.990194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:09.990208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:10.003601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:10.003623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:10.017516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:10.017536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:10.030350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:10.030366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:10.042921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:10.042943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:10.055952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:10.055966] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:10.069317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:10.069332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.453 [2024-10-11 11:43:10.082112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.453 [2024-10-11 11:43:10.082127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.095189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.095204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.108722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.108737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.122662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.122685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.136974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.136994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.149479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.149494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.162445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.162460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.175145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.175159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.188127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.188142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.201395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.201409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.215157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.215172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.228861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.228875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.242030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.714 [2024-10-11 11:43:10.242044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.714 [2024-10-11 11:43:10.255467] 
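[Note: the flood above is the point of this zcopy test step: a background helper keeps trying to claim NSID 1, which the live namespace already holds, while bdevperf drives I/O. A plausible shape for that helper, offered only as a sketch (rpc_cmd is the harness wrapper seen in the trace further down; the loop bound and the malloc0 bdev name are assumptions, and the real zcopy.sh may differ):

    # Hypothetical reconstruction of the background "NSID hammer" (PID 867772 in
    # this log). Every attempt fails inside the target with the error pair elided
    # above, and the RPC returns non-zero, which the loop deliberately ignores.
    hammer_nsid() {
        local i
        for ((i = 0; i < 215; i++)); do    # assumed bound; the real loop finishes
            rpc_cmd nvmf_subsystem_add_ns \
                nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
        done
    }
    hammer_nsid &                          # exits on its own before the script's
    hammer_pid=$!                          # kill, hence "No such process" below
]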
[... the error pair continues at the same cadence, 11:43:10.255483 through 11:43:10.307707, then bdevperf prints its 5-second run summary ...]
00:09:25.715 19215.60 IOPS, 150.12 MiB/s
00:09:25.715 Latency(us)
00:09:25.715 [2024-10-11T09:43:10.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:25.715 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:25.715 Nvme1n1 : 5.00 19225.09 150.20 0.00 0.00 6652.86 2771.63 18022.40
00:09:25.715 [2024-10-11T09:43:10.347Z] ===================================================================================================================
00:09:25.715 [2024-10-11T09:43:10.347Z] Total : 19225.09 150.20 0.00 0.00 6652.86 2771.63 18022.40
[... after the summary the same error pair resumes, eight more repetitions from 11:43:10.317832 through 11:43:10.402043 ...]
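[Note: a quick cross-check of the summary row above, independent arithmetic rather than log output: 19225.09 IOPS of 8192-byte I/O is 19225.09 * 8192 / 1048576 = 150.20 MiB/s, matching the MiB/s column, and Little's law with queue depth 128 predicts 128 / 6652.86 us, about 19240 IOPS, consistent with the measured rate. In shell form:

    # verify the bdevperf summary row from the values it prints
    awk 'BEGIN { printf "%.2f MiB/s\n", 19225.09 * 8192 / 1048576 }'   # -> 150.20 MiB/s
    awk 'BEGIN { printf "%.0f IOPS\n",  128 / (6652.86 / 1e6) }'       # -> ~19240 IOPS
]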
[... one final error pair lands at 11:43:10.414, after which the background process is already gone ...]
00:09:25.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (867772) - No such process
00:09:25.975 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 867772
00:09:25.975 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:25.975 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.975 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.975 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.975 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:25.975 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.975 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.975 delay0
00:09:25.976 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.976 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:25.976 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:25.976 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.976 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:25.976 11:43:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:26.236 [2024-10-11 11:43:10.609850] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:34.372 Initializing NVMe Controllers
00:09:34.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:34.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:34.372 Initialization complete. Launching workers.
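[Note: context for zcopy.sh@52-@56 above, ahead of the abort statistics that follow: the script removes the direct namespace, wraps malloc0 in a delay bdev (bdev_delay_create's -r/-t/-w/-n arguments set average and p99 read/write latencies in microseconds, 1 s each here) and re-exports it as NSID 1, so queued I/O lingers in the target long enough for the abort example to cancel it. Outside the harness, roughly the same sequence with the stock rpc.py would look like this sketch; the RPC socket is assumed to be the default, while the NQN, bdev names and abort flags are taken from the trace:

    RPC=scripts/rpc.py                                 # default socket /var/tmp/spdk.sock
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $RPC bdev_delay_create -b malloc0 -d delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write latency (us)
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
         -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The counters below are self-consistent: 32854 successful + 90 unsuccessful = 32944 submitted aborts, and 32944 + 137 failed submissions = 33081, which equals the 242 completed plus 32839 aborted I/Os.]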
00:09:34.372 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 32839
00:09:34.372 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32944, failed to submit 137
00:09:34.372 success 32854, unsuccessful 90, failed 0
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:34.372 rmmod nvme_tcp
00:09:34.372 rmmod nvme_fabrics
00:09:34.372 rmmod nvme_keyring
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 865405 ']'
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 865405
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 865405 ']'
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 865405
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 865405
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 865405'
00:09:34.372 killing process with pid 865405
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 865405
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 865405
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.372 11:43:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.756 11:43:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.756 00:09:35.756 real 0m34.388s 00:09:35.756 user 0m45.092s 00:09:35.756 sys 0m11.979s 00:09:35.756 11:43:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.756 11:43:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.756 ************************************ 00:09:35.756 END TEST nvmf_zcopy 00:09:35.756 ************************************ 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.756 ************************************ 00:09:35.756 START TEST nvmf_nmic 00:09:35.756 ************************************ 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:35.756 * Looking for test storage... 
00:09:35.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:35.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.756 --rc genhtml_branch_coverage=1 00:09:35.756 --rc genhtml_function_coverage=1 00:09:35.756 --rc genhtml_legend=1 00:09:35.756 --rc geninfo_all_blocks=1 00:09:35.756 --rc geninfo_unexecuted_blocks=1 00:09:35.756 00:09:35.756 ' 00:09:35.756 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:35.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.756 --rc genhtml_branch_coverage=1 00:09:35.757 --rc genhtml_function_coverage=1 00:09:35.757 --rc genhtml_legend=1 00:09:35.757 --rc geninfo_all_blocks=1 00:09:35.757 --rc geninfo_unexecuted_blocks=1 00:09:35.757 00:09:35.757 ' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:35.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.757 --rc genhtml_branch_coverage=1 00:09:35.757 --rc genhtml_function_coverage=1 00:09:35.757 --rc genhtml_legend=1 00:09:35.757 --rc geninfo_all_blocks=1 00:09:35.757 --rc geninfo_unexecuted_blocks=1 00:09:35.757 00:09:35.757 ' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:35.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.757 --rc genhtml_branch_coverage=1 00:09:35.757 --rc genhtml_function_coverage=1 00:09:35.757 --rc genhtml_legend=1 00:09:35.757 --rc geninfo_all_blocks=1 00:09:35.757 --rc geninfo_unexecuted_blocks=1 00:09:35.757 00:09:35.757 ' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:35.757 
11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.757 11:43:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:43.893 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:43.893 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:43.893 11:43:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:43.893 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:43.893 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:43.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:09:43.893 00:09:43.893 --- 10.0.0.2 ping statistics --- 00:09:43.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.893 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:09:43.893 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:09:43.894 00:09:43.894 --- 10.0.0.1 ping statistics --- 00:09:43.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.894 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=874460 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 874460 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 874460 ']' 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.894 11:43:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:43.894 [2024-10-11 11:43:27.810934] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:09:43.894 [2024-10-11 11:43:27.810999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.894 [2024-10-11 11:43:27.903120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.894 [2024-10-11 11:43:27.958487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.894 [2024-10-11 11:43:27.958545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.894 [2024-10-11 11:43:27.958553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.894 [2024-10-11 11:43:27.958561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.894 [2024-10-11 11:43:27.958567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.894 [2024-10-11 11:43:27.961022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.894 [2024-10-11 11:43:27.961188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.894 [2024-10-11 11:43:27.961393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.894 [2024-10-11 11:43:27.961393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.155 [2024-10-11 11:43:28.696876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.155 Malloc0 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.155 [2024-10-11 11:43:28.775812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:44.155 test case1: single bdev can't be used in multiple subsystems 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.155 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.416 [2024-10-11 11:43:28.811633] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:44.416 [2024-10-11 11:43:28.811672] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:44.416 [2024-10-11 11:43:28.811682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.416 request: 00:09:44.416 { 00:09:44.416 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:44.416 "namespace": { 00:09:44.416 "bdev_name": "Malloc0", 00:09:44.416 "no_auto_visible": false 
00:09:44.416 }, 00:09:44.416 "method": "nvmf_subsystem_add_ns", 00:09:44.416 "req_id": 1 00:09:44.416 } 00:09:44.416 Got JSON-RPC error response 00:09:44.416 response: 00:09:44.416 { 00:09:44.416 "code": -32602, 00:09:44.416 "message": "Invalid parameters" 00:09:44.416 } 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:44.416 Adding namespace failed - expected result. 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:44.416 test case2: host connect to nvmf target in multiple paths 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.416 [2024-10-11 11:43:28.823828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.416 11:43:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:45.801 11:43:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:47.712 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:47.712 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:47.712 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.712 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:47.712 11:43:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:49.656 11:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:49.656 11:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:49.656 11:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.656 11:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:49.656 11:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.656 11:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:49.656 11:43:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:49.656 [global] 00:09:49.656 thread=1 00:09:49.656 invalidate=1 00:09:49.656 rw=write 00:09:49.656 time_based=1 00:09:49.656 runtime=1 00:09:49.656 ioengine=libaio 00:09:49.656 direct=1 00:09:49.656 bs=4096 00:09:49.656 iodepth=1 00:09:49.656 norandommap=0 00:09:49.656 numjobs=1 00:09:49.656 00:09:49.656 verify_dump=1 00:09:49.656 verify_backlog=512 00:09:49.656 verify_state_save=0 00:09:49.656 do_verify=1 00:09:49.656 verify=crc32c-intel 00:09:49.656 [job0] 00:09:49.656 filename=/dev/nvme0n1 00:09:49.656 Could not set queue depth (nvme0n1) 00:09:49.656 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.656 fio-3.35 00:09:49.656 Starting 1 thread 00:09:51.123 00:09:51.123 job0: (groupid=0, jobs=1): err= 0: pid=875996: Fri Oct 11 11:43:35 2024 00:09:51.123 read: IOPS=553, BW=2214KiB/s (2267kB/s)(2216KiB/1001msec) 00:09:51.123 slat (nsec): min=6891, max=44979, avg=25509.38, stdev=4497.23 00:09:51.123 clat (usec): min=155, max=1197, avg=917.55, stdev=180.08 00:09:51.123 lat (usec): min=162, max=1222, avg=943.06, stdev=182.05 00:09:51.123 clat percentiles (usec): 00:09:51.123 | 1.00th=[ 289], 5.00th=[ 515], 10.00th=[ 603], 20.00th=[ 881], 00:09:51.123 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:09:51.123 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:09:51.123 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1205], 00:09:51.123 | 99.99th=[ 1205] 00:09:51.123 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:51.123 slat (nsec): min=9245, max=67779, avg=26506.68, stdev=10715.96 00:09:51.123 clat (usec): min=92, max=891, avg=429.59, stdev=181.29 00:09:51.123 lat (usec): min=110, max=939, avg=456.09, stdev=185.41 00:09:51.123 clat percentiles (usec): 00:09:51.123 | 1.00th=[ 116], 5.00th=[ 196], 10.00th=[ 229], 20.00th=[ 265], 00:09:51.123 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 383], 60.00th=[ 510], 00:09:51.123 | 70.00th=[ 578], 80.00th=[ 627], 90.00th=[ 676], 95.00th=[ 701], 00:09:51.123 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 824], 99.95th=[ 889], 00:09:51.123 | 99.99th=[ 889] 00:09:51.123 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:51.123 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:51.123 lat (usec) : 100=0.13%, 250=10.71%, 500=29.47%, 750=28.83%, 1000=19.14% 00:09:51.123 lat (msec) : 2=11.72% 00:09:51.123 cpu : usr=2.90%, sys=4.90%, ctx=1578, majf=0, minf=1 00:09:51.123 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.123 issued rwts: total=554,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.123 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:51.123 00:09:51.123 Run status group 0 (all jobs): 00:09:51.123 READ: bw=2214KiB/s (2267kB/s), 2214KiB/s-2214KiB/s (2267kB/s-2267kB/s), io=2216KiB (2269kB), run=1001-1001msec 00:09:51.123 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:09:51.123 00:09:51.123 Disk stats (read/write): 00:09:51.123 nvme0n1: ios=562/934, merge=0/0, ticks=517/340, in_queue=857, util=93.59% 00:09:51.123 11:43:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.123 rmmod nvme_tcp 00:09:51.123 rmmod nvme_fabrics 00:09:51.123 rmmod nvme_keyring 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 874460 ']' 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 874460 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 874460 ']' 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 874460 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 874460 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 874460' 00:09:51.123 killing process with pid 874460 00:09:51.123 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 874460 00:09:51.123 11:43:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 874460 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.397 11:43:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.355 11:43:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.355 00:09:53.355 real 0m17.827s 00:09:53.355 user 0m49.832s 00:09:53.355 sys 0m6.476s 00:09:53.355 11:43:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.355 11:43:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.355 ************************************ 00:09:53.355 END TEST nvmf_nmic 00:09:53.355 ************************************ 00:09:53.355 11:43:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:53.355 11:43:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:53.355 11:43:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.355 11:43:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.355 ************************************ 00:09:53.355 START TEST nvmf_fio_target 00:09:53.355 ************************************ 00:09:53.356 11:43:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:53.616 * Looking for test storage... 
00:09:53.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:53.616 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.617 --rc genhtml_branch_coverage=1 00:09:53.617 --rc genhtml_function_coverage=1 00:09:53.617 --rc genhtml_legend=1 00:09:53.617 --rc geninfo_all_blocks=1 00:09:53.617 --rc geninfo_unexecuted_blocks=1 00:09:53.617 00:09:53.617 ' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.617 --rc genhtml_branch_coverage=1 00:09:53.617 --rc genhtml_function_coverage=1 00:09:53.617 --rc genhtml_legend=1 00:09:53.617 --rc geninfo_all_blocks=1 00:09:53.617 --rc geninfo_unexecuted_blocks=1 00:09:53.617 00:09:53.617 ' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.617 --rc genhtml_branch_coverage=1 00:09:53.617 --rc genhtml_function_coverage=1 00:09:53.617 --rc genhtml_legend=1 00:09:53.617 --rc geninfo_all_blocks=1 00:09:53.617 --rc geninfo_unexecuted_blocks=1 00:09:53.617 00:09:53.617 ' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:53.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.617 --rc genhtml_branch_coverage=1 00:09:53.617 --rc genhtml_function_coverage=1 00:09:53.617 --rc genhtml_legend=1 00:09:53.617 --rc geninfo_all_blocks=1 00:09:53.617 --rc geninfo_unexecuted_blocks=1 00:09:53.617 00:09:53.617 ' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.617 11:43:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.617 11:43:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:01.753 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.754 11:43:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:01.754 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:01.754 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.754 11:43:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:01.754 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:01.754 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.754 11:43:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:10:01.754 00:10:01.754 --- 10.0.0.2 ping statistics --- 00:10:01.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.754 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:01.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:10:01.754 00:10:01.754 --- 10.0.0.1 ping statistics --- 00:10:01.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.754 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=880370 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 880370 00:10:01.754 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:01.755 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 880370 ']' 00:10:01.755 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.755 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:01.755 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.755 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:01.755 11:43:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.755 [2024-10-11 11:43:45.680632] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:10:01.755 [2024-10-11 11:43:45.680714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.755 [2024-10-11 11:43:45.768333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.755 [2024-10-11 11:43:45.821620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.755 [2024-10-11 11:43:45.821684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.755 [2024-10-11 11:43:45.821693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.755 [2024-10-11 11:43:45.821700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.755 [2024-10-11 11:43:45.821706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.755 [2024-10-11 11:43:45.824016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.755 [2024-10-11 11:43:45.824174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.755 [2024-10-11 11:43:45.824336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.755 [2024-10-11 11:43:45.824338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.016 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.016 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:02.016 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:02.016 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:02.016 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.016 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.016 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:02.276 [2024-10-11 11:43:46.717027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.276 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.536 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:02.536 11:43:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.797 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:02.797 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.797 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:02.797 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.058 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:03.058 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:03.318 11:43:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.577 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:03.577 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.837 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:03.837 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.837 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:03.837 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:04.096 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.356 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:04.356 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.616 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:04.616 11:43:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.616 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.876 [2024-10-11 11:43:49.333435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.876 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:05.138 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:05.138 11:43:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.051 11:43:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:07.051 11:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:07.051 11:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.051 11:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:07.051 11:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:07.051 11:43:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:08.966 11:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:08.966 11:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:08.966 11:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:08.966 11:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:08.966 11:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:08.966 11:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:08.966 11:43:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:08.966 [global] 00:10:08.966 thread=1 00:10:08.966 invalidate=1 00:10:08.966 rw=write 00:10:08.966 time_based=1 00:10:08.966 runtime=1 00:10:08.966 ioengine=libaio 00:10:08.966 direct=1 00:10:08.966 bs=4096 00:10:08.966 iodepth=1 00:10:08.966 norandommap=0 00:10:08.966 numjobs=1 00:10:08.966 00:10:08.966 verify_dump=1 00:10:08.966 verify_backlog=512 00:10:08.966 verify_state_save=0 00:10:08.966 do_verify=1 00:10:08.966 verify=crc32c-intel 00:10:08.966 [job0] 00:10:08.966 filename=/dev/nvme0n1 00:10:08.966 [job1] 00:10:08.966 filename=/dev/nvme0n2 00:10:08.966 [job2] 00:10:08.966 filename=/dev/nvme0n3 00:10:08.966 [job3] 00:10:08.966 filename=/dev/nvme0n4 00:10:08.966 Could not set queue depth (nvme0n1) 00:10:08.966 Could not set queue depth (nvme0n2) 00:10:08.966 Could not set queue depth (nvme0n3) 00:10:08.966 Could not set queue depth (nvme0n4) 00:10:09.226 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.226 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.226 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.226 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.226 fio-3.35 00:10:09.226 Starting 4 threads 00:10:10.624 00:10:10.624 job0: (groupid=0, jobs=1): err= 0: pid=882286: Fri Oct 11 11:43:54 2024 00:10:10.624 read: IOPS=17, BW=69.7KiB/s (71.4kB/s)(72.0KiB/1033msec) 00:10:10.624 slat (nsec): min=27219, max=31443, avg=27707.89, stdev=968.42 00:10:10.624 clat (usec): min=1140, max=42966, avg=39895.06, stdev=9679.59 00:10:10.624 lat (usec): min=1168, max=42993, avg=39922.77, stdev=9679.70 00:10:10.624 clat percentiles (usec): 00:10:10.624 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41681], 
20.00th=[41681], 00:10:10.624 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:10.624 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:10:10.624 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:10.624 | 99.99th=[42730] 00:10:10.624 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:10:10.624 slat (nsec): min=9347, max=56710, avg=33338.82, stdev=8328.92 00:10:10.624 clat (usec): min=105, max=1310, avg=571.90, stdev=130.10 00:10:10.624 lat (usec): min=114, max=1350, avg=605.24, stdev=132.65 00:10:10.624 clat percentiles (usec): 00:10:10.624 | 1.00th=[ 243], 5.00th=[ 363], 10.00th=[ 392], 20.00th=[ 474], 00:10:10.624 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 619], 00:10:10.624 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 750], 00:10:10.624 | 99.00th=[ 832], 99.50th=[ 881], 99.90th=[ 1303], 99.95th=[ 1303], 00:10:10.624 | 99.99th=[ 1303] 00:10:10.624 bw ( KiB/s): min= 4096, max= 4096, per=40.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.624 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.624 lat (usec) : 250=1.13%, 500=24.91%, 750=66.04%, 1000=4.34% 00:10:10.624 lat (msec) : 2=0.38%, 50=3.21% 00:10:10.624 cpu : usr=0.58%, sys=2.62%, ctx=532, majf=0, minf=1 00:10:10.624 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.624 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.624 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.624 job1: (groupid=0, jobs=1): err= 0: pid=882287: Fri Oct 11 11:43:54 2024 00:10:10.624 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:10.624 slat (nsec): min=7299, max=44575, avg=26131.64, stdev=1628.30 00:10:10.624 clat (usec): min=549, max=1247, avg=960.06, stdev=87.56 00:10:10.624 lat (usec): min=574, max=1273, avg=986.19, stdev=87.73 00:10:10.624 clat percentiles (usec): 00:10:10.624 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 848], 20.00th=[ 906], 00:10:10.624 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 988], 00:10:10.624 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1074], 00:10:10.624 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:10.625 | 99.99th=[ 1254] 00:10:10.625 write: IOPS=767, BW=3069KiB/s (3143kB/s)(3072KiB/1001msec); 0 zone resets 00:10:10.625 slat (nsec): min=10055, max=60460, avg=31090.85, stdev=9229.97 00:10:10.625 clat (usec): min=214, max=1053, avg=596.32, stdev=130.97 00:10:10.625 lat (usec): min=226, max=1086, avg=627.41, stdev=134.37 00:10:10.625 clat percentiles (usec): 00:10:10.625 | 1.00th=[ 265], 5.00th=[ 367], 10.00th=[ 429], 20.00th=[ 494], 00:10:10.625 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:10:10.625 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 816], 00:10:10.625 | 99.00th=[ 898], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057], 00:10:10.625 | 99.99th=[ 1057] 00:10:10.625 bw ( KiB/s): min= 4096, max= 4096, per=40.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.625 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.625 lat (usec) : 250=0.31%, 500=12.11%, 750=42.19%, 1000=32.97% 00:10:10.625 lat (msec) : 2=12.42% 00:10:10.625 cpu : usr=1.70%, sys=4.10%, ctx=1282, majf=0, minf=1 00:10:10.625 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.625 issued rwts: total=512,768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.625 job2: (groupid=0, jobs=1): err= 0: pid=882288: Fri Oct 11 11:43:54 2024 00:10:10.625 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:10.625 slat (nsec): min=8048, max=60919, avg=26767.86, stdev=3319.16 00:10:10.625 clat (usec): min=520, max=1151, avg=954.13, stdev=86.23 00:10:10.625 lat (usec): min=547, max=1178, avg=980.90, stdev=86.18 00:10:10.625 clat percentiles (usec): 00:10:10.625 | 1.00th=[ 668], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 889], 00:10:10.625 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:10:10.625 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1074], 00:10:10.625 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:10.625 | 99.99th=[ 1156] 00:10:10.625 write: IOPS=811, BW=3245KiB/s (3323kB/s)(3248KiB/1001msec); 0 zone resets 00:10:10.625 slat (usec): min=5, max=30986, avg=65.52, stdev=1086.59 00:10:10.625 clat (usec): min=113, max=862, avg=531.72, stdev=137.21 00:10:10.625 lat (usec): min=121, max=31671, avg=597.24, stdev=1101.41 00:10:10.625 clat percentiles (usec): 00:10:10.625 | 1.00th=[ 235], 5.00th=[ 289], 10.00th=[ 330], 20.00th=[ 424], 00:10:10.625 | 30.00th=[ 469], 40.00th=[ 506], 50.00th=[ 545], 60.00th=[ 578], 00:10:10.625 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 750], 00:10:10.625 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 865], 99.95th=[ 865], 00:10:10.625 | 99.99th=[ 865] 00:10:10.625 bw ( KiB/s): min= 4096, max= 4096, per=40.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.625 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.625 lat (usec) : 250=1.06%, 500=22.73%, 750=35.35%, 1000=30.14% 00:10:10.625 lat (msec) : 2=10.73% 00:10:10.625 cpu : usr=2.50%, sys=3.00%, ctx=1327, majf=0, minf=1 00:10:10.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.625 issued rwts: total=512,812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.625 job3: (groupid=0, jobs=1): err= 0: pid=882289: Fri Oct 11 11:43:54 2024 00:10:10.625 read: IOPS=20, BW=82.0KiB/s (84.0kB/s)(84.0KiB/1024msec) 00:10:10.625 slat (nsec): min=24640, max=25414, avg=24906.05, stdev=182.09 00:10:10.625 clat (usec): min=777, max=42178, avg=39179.05, stdev=8807.58 00:10:10.625 lat (usec): min=803, max=42203, avg=39203.96, stdev=8807.46 00:10:10.625 clat percentiles (usec): 00:10:10.625 | 1.00th=[ 775], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:10.625 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:10.625 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:10.625 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:10.625 | 99.99th=[42206] 00:10:10.625 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:10:10.625 slat (nsec): min=9392, max=58305, avg=26724.93, stdev=10256.57 00:10:10.625 clat (usec): min=104, 
max=873, avg=352.02, stdev=167.41 00:10:10.625 lat (usec): min=114, max=919, avg=378.74, stdev=172.30 00:10:10.625 clat percentiles (usec): 00:10:10.625 | 1.00th=[ 109], 5.00th=[ 118], 10.00th=[ 131], 20.00th=[ 204], 00:10:10.625 | 30.00th=[ 251], 40.00th=[ 281], 50.00th=[ 318], 60.00th=[ 379], 00:10:10.625 | 70.00th=[ 449], 80.00th=[ 510], 90.00th=[ 586], 95.00th=[ 627], 00:10:10.625 | 99.00th=[ 758], 99.50th=[ 799], 99.90th=[ 873], 99.95th=[ 873], 00:10:10.625 | 99.99th=[ 873] 00:10:10.625 bw ( KiB/s): min= 4096, max= 4096, per=40.62%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.625 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.625 lat (usec) : 250=28.33%, 500=47.47%, 750=18.95%, 1000=1.50% 00:10:10.625 lat (msec) : 50=3.75% 00:10:10.625 cpu : usr=0.59%, sys=1.47%, ctx=535, majf=0, minf=1 00:10:10.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.625 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.625 00:10:10.625 Run status group 0 (all jobs): 00:10:10.625 READ: bw=4116KiB/s (4215kB/s), 69.7KiB/s-2046KiB/s (71.4kB/s-2095kB/s), io=4252KiB (4354kB), run=1001-1033msec 00:10:10.625 WRITE: bw=9.85MiB/s (10.3MB/s), 1983KiB/s-3245KiB/s (2030kB/s-3323kB/s), io=10.2MiB (10.7MB), run=1001-1033msec 00:10:10.625 00:10:10.625 Disk stats (read/write): 00:10:10.625 nvme0n1: ios=35/512, merge=0/0, ticks=1352/232, in_queue=1584, util=84.17% 00:10:10.625 nvme0n2: ios=547/512, merge=0/0, ticks=1278/306, in_queue=1584, util=87.74% 00:10:10.625 nvme0n3: ios=566/532, merge=0/0, ticks=988/278, in_queue=1266, util=94.19% 00:10:10.625 nvme0n4: ios=73/512, merge=0/0, ticks=730/168, in_queue=898, util=97.01% 00:10:10.625 11:43:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:10.625 [global] 00:10:10.625 thread=1 00:10:10.625 invalidate=1 00:10:10.625 rw=randwrite 00:10:10.625 time_based=1 00:10:10.625 runtime=1 00:10:10.625 ioengine=libaio 00:10:10.625 direct=1 00:10:10.625 bs=4096 00:10:10.625 iodepth=1 00:10:10.625 norandommap=0 00:10:10.625 numjobs=1 00:10:10.625 00:10:10.625 verify_dump=1 00:10:10.625 verify_backlog=512 00:10:10.625 verify_state_save=0 00:10:10.625 do_verify=1 00:10:10.625 verify=crc32c-intel 00:10:10.625 [job0] 00:10:10.625 filename=/dev/nvme0n1 00:10:10.625 [job1] 00:10:10.625 filename=/dev/nvme0n2 00:10:10.625 [job2] 00:10:10.625 filename=/dev/nvme0n3 00:10:10.625 [job3] 00:10:10.625 filename=/dev/nvme0n4 00:10:10.625 Could not set queue depth (nvme0n1) 00:10:10.625 Could not set queue depth (nvme0n2) 00:10:10.625 Could not set queue depth (nvme0n3) 00:10:10.625 Could not set queue depth (nvme0n4) 00:10:10.885 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.885 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.885 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.885 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.885 fio-3.35 00:10:10.885 Starting 4 threads 
00:10:12.294 00:10:12.294 job0: (groupid=0, jobs=1): err= 0: pid=882816: Fri Oct 11 11:43:56 2024 00:10:12.294 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:12.294 slat (nsec): min=7446, max=58075, avg=26151.03, stdev=4000.27 00:10:12.294 clat (usec): min=501, max=1222, avg=935.44, stdev=149.38 00:10:12.294 lat (usec): min=526, max=1248, avg=961.59, stdev=149.27 00:10:12.294 clat percentiles (usec): 00:10:12.294 | 1.00th=[ 603], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 791], 00:10:12.294 | 30.00th=[ 840], 40.00th=[ 889], 50.00th=[ 955], 60.00th=[ 1012], 00:10:12.294 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:10:12.294 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:12.294 | 99.99th=[ 1221] 00:10:12.294 write: IOPS=712, BW=2849KiB/s (2918kB/s)(2852KiB/1001msec); 0 zone resets 00:10:12.294 slat (nsec): min=10159, max=53697, avg=32486.54, stdev=5883.70 00:10:12.294 clat (usec): min=257, max=1037, avg=665.52, stdev=157.46 00:10:12.294 lat (usec): min=289, max=1069, avg=698.00, stdev=158.10 00:10:12.294 clat percentiles (usec): 00:10:12.294 | 1.00th=[ 297], 5.00th=[ 408], 10.00th=[ 449], 20.00th=[ 519], 00:10:12.294 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 676], 60.00th=[ 709], 00:10:12.294 | 70.00th=[ 758], 80.00th=[ 807], 90.00th=[ 873], 95.00th=[ 914], 00:10:12.294 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1037], 99.95th=[ 1037], 00:10:12.294 | 99.99th=[ 1037] 00:10:12.294 bw ( KiB/s): min= 4096, max= 4096, per=38.32%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.294 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.294 lat (usec) : 500=9.06%, 750=36.00%, 1000=36.73% 00:10:12.294 lat (msec) : 2=18.20% 00:10:12.294 cpu : usr=1.90%, sys=3.70%, ctx=1227, majf=0, minf=1 00:10:12.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.294 issued rwts: total=512,713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.294 job1: (groupid=0, jobs=1): err= 0: pid=882817: Fri Oct 11 11:43:56 2024 00:10:12.294 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:12.294 slat (nsec): min=11701, max=41783, avg=25010.03, stdev=1741.88 00:10:12.294 clat (usec): min=645, max=1216, avg=986.99, stdev=87.89 00:10:12.294 lat (usec): min=670, max=1241, avg=1012.00, stdev=87.77 00:10:12.294 clat percentiles (usec): 00:10:12.294 | 1.00th=[ 742], 5.00th=[ 783], 10.00th=[ 865], 20.00th=[ 930], 00:10:12.294 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1004], 60.00th=[ 1020], 00:10:12.294 | 70.00th=[ 1037], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:10:12.294 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:12.294 | 99.99th=[ 1221] 00:10:12.294 write: IOPS=736, BW=2945KiB/s (3016kB/s)(2948KiB/1001msec); 0 zone resets 00:10:12.294 slat (nsec): min=9204, max=52629, avg=27992.57, stdev=8905.00 00:10:12.294 clat (usec): min=229, max=907, avg=613.37, stdev=104.24 00:10:12.294 lat (usec): min=256, max=917, avg=641.36, stdev=108.27 00:10:12.294 clat percentiles (usec): 00:10:12.294 | 1.00th=[ 347], 5.00th=[ 420], 10.00th=[ 469], 20.00th=[ 537], 00:10:12.294 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 660], 00:10:12.294 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 725], 95.00th=[ 750], 00:10:12.294 | 
99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 906], 99.95th=[ 906], 00:10:12.294 | 99.99th=[ 906] 00:10:12.294 bw ( KiB/s): min= 4096, max= 4096, per=38.32%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.294 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.294 lat (usec) : 250=0.16%, 500=8.73%, 750=47.40%, 1000=21.22% 00:10:12.294 lat (msec) : 2=22.50% 00:10:12.294 cpu : usr=2.00%, sys=3.40%, ctx=1249, majf=0, minf=1 00:10:12.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.294 issued rwts: total=512,737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.294 job2: (groupid=0, jobs=1): err= 0: pid=882818: Fri Oct 11 11:43:56 2024 00:10:12.294 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:12.294 slat (nsec): min=8726, max=42837, avg=26143.97, stdev=1518.03 00:10:12.294 clat (usec): min=424, max=1183, avg=961.00, stdev=106.26 00:10:12.294 lat (usec): min=450, max=1209, avg=987.14, stdev=106.21 00:10:12.294 clat percentiles (usec): 00:10:12.294 | 1.00th=[ 644], 5.00th=[ 766], 10.00th=[ 816], 20.00th=[ 889], 00:10:12.294 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 979], 60.00th=[ 1004], 00:10:12.294 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:10:12.294 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:10:12.294 | 99.99th=[ 1188] 00:10:12.294 write: IOPS=800, BW=3201KiB/s (3278kB/s)(3204KiB/1001msec); 0 zone resets 00:10:12.294 slat (nsec): min=9798, max=63888, avg=31601.52, stdev=7543.36 00:10:12.294 clat (usec): min=207, max=1038, avg=572.54, stdev=133.44 00:10:12.294 lat (usec): min=217, max=1072, avg=604.14, stdev=135.70 00:10:12.294 clat percentiles (usec): 00:10:12.294 | 1.00th=[ 255], 5.00th=[ 347], 10.00th=[ 388], 20.00th=[ 465], 00:10:12.294 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 611], 00:10:12.294 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 775], 00:10:12.294 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1037], 99.95th=[ 1037], 00:10:12.294 | 99.99th=[ 1037] 00:10:12.294 bw ( KiB/s): min= 4096, max= 4096, per=38.32%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.294 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.294 lat (usec) : 250=0.30%, 500=18.58%, 750=38.16%, 1000=26.73% 00:10:12.294 lat (msec) : 2=16.22% 00:10:12.294 cpu : usr=2.00%, sys=4.00%, ctx=1314, majf=0, minf=1 00:10:12.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.294 issued rwts: total=512,801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.294 job3: (groupid=0, jobs=1): err= 0: pid=882819: Fri Oct 11 11:43:56 2024 00:10:12.294 read: IOPS=16, BW=65.8KiB/s (67.3kB/s)(68.0KiB/1034msec) 00:10:12.294 slat (nsec): min=26634, max=27359, avg=26940.12, stdev=170.16 00:10:12.294 clat (usec): min=40895, max=42819, avg=41664.82, stdev=575.85 00:10:12.294 lat (usec): min=40922, max=42845, avg=41691.76, stdev=575.80 00:10:12.294 clat percentiles (usec): 00:10:12.294 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:10:12.295 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:12.295 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:12.295 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:12.295 | 99.99th=[42730] 00:10:12.295 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:10:12.295 slat (nsec): min=9112, max=65794, avg=30981.57, stdev=8366.84 00:10:12.295 clat (usec): min=231, max=917, avg=595.52, stdev=133.91 00:10:12.295 lat (usec): min=241, max=951, avg=626.50, stdev=136.34 00:10:12.295 clat percentiles (usec): 00:10:12.295 | 1.00th=[ 277], 5.00th=[ 363], 10.00th=[ 412], 20.00th=[ 482], 00:10:12.295 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644], 00:10:12.295 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 807], 00:10:12.295 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 922], 99.95th=[ 922], 00:10:12.295 | 99.99th=[ 922] 00:10:12.295 bw ( KiB/s): min= 4096, max= 4096, per=38.32%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.295 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.295 lat (usec) : 250=0.38%, 500=23.63%, 750=61.25%, 1000=11.53% 00:10:12.295 lat (msec) : 50=3.21% 00:10:12.295 cpu : usr=0.97%, sys=2.03%, ctx=529, majf=0, minf=2 00:10:12.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.295 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.295 00:10:12.295 Run status group 0 (all jobs): 00:10:12.295 READ: bw=6008KiB/s (6152kB/s), 65.8KiB/s-2046KiB/s (67.3kB/s-2095kB/s), io=6212KiB (6361kB), run=1001-1034msec 00:10:12.295 WRITE: bw=10.4MiB/s (10.9MB/s), 1981KiB/s-3201KiB/s (2028kB/s-3278kB/s), io=10.8MiB (11.3MB), run=1001-1034msec 00:10:12.295 00:10:12.295 Disk stats (read/write): 00:10:12.295 nvme0n1: ios=513/512, merge=0/0, ticks=1414/321, in_queue=1735, util=96.79% 00:10:12.295 nvme0n2: ios=525/512, merge=0/0, ticks=521/305, in_queue=826, util=87.04% 00:10:12.295 nvme0n3: ios=536/528, merge=0/0, ticks=1419/289, in_queue=1708, util=97.04% 00:10:12.295 nvme0n4: ios=12/512, merge=0/0, ticks=499/224, in_queue=723, util=89.52% 00:10:12.295 11:43:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:12.295 [global] 00:10:12.295 thread=1 00:10:12.295 invalidate=1 00:10:12.295 rw=write 00:10:12.295 time_based=1 00:10:12.295 runtime=1 00:10:12.295 ioengine=libaio 00:10:12.295 direct=1 00:10:12.295 bs=4096 00:10:12.295 iodepth=128 00:10:12.295 norandommap=0 00:10:12.295 numjobs=1 00:10:12.295 00:10:12.295 verify_dump=1 00:10:12.295 verify_backlog=512 00:10:12.295 verify_state_save=0 00:10:12.295 do_verify=1 00:10:12.295 verify=crc32c-intel 00:10:12.295 [job0] 00:10:12.295 filename=/dev/nvme0n1 00:10:12.295 [job1] 00:10:12.295 filename=/dev/nvme0n2 00:10:12.295 [job2] 00:10:12.295 filename=/dev/nvme0n3 00:10:12.295 [job3] 00:10:12.295 filename=/dev/nvme0n4 00:10:12.295 Could not set queue depth (nvme0n1) 00:10:12.295 Could not set queue depth (nvme0n2) 00:10:12.295 Could not set queue depth (nvme0n3) 00:10:12.295 Could not set queue depth (nvme0n4) 00:10:12.557 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.557 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.557 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.557 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.557 fio-3.35 00:10:12.557 Starting 4 threads 00:10:13.975 00:10:13.975 job0: (groupid=0, jobs=1): err= 0: pid=883335: Fri Oct 11 11:43:58 2024 00:10:13.975 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:13.975 slat (nsec): min=887, max=10452k, avg=78936.49, stdev=545390.38 00:10:13.975 clat (usec): min=2470, max=33991, avg=10174.87, stdev=4900.44 00:10:13.975 lat (usec): min=2476, max=34017, avg=10253.81, stdev=4939.91 00:10:13.975 clat percentiles (usec): 00:10:13.975 | 1.00th=[ 3097], 5.00th=[ 3884], 10.00th=[ 5604], 20.00th=[ 6652], 00:10:13.975 | 30.00th=[ 7111], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[10421], 00:10:13.975 | 70.00th=[11863], 80.00th=[13042], 90.00th=[16319], 95.00th=[19006], 00:10:13.975 | 99.00th=[30016], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:10:13.975 | 99.99th=[33817] 00:10:13.975 write: IOPS=5843, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1003msec); 0 zone resets 00:10:13.975 slat (nsec): min=1530, max=12078k, avg=88269.79, stdev=542034.13 00:10:13.975 clat (usec): min=578, max=45397, avg=11942.42, stdev=8170.92 00:10:13.975 lat (usec): min=1221, max=45418, avg=12030.69, stdev=8227.36 00:10:13.975 clat percentiles (usec): 00:10:13.975 | 1.00th=[ 3326], 5.00th=[ 4752], 10.00th=[ 6259], 20.00th=[ 6587], 00:10:13.975 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 8225], 60.00th=[ 9896], 00:10:13.975 | 70.00th=[12649], 80.00th=[16450], 90.00th=[23462], 95.00th=[32113], 00:10:13.975 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:10:13.975 | 99.99th=[45351] 00:10:13.975 bw ( KiB/s): min=21240, max=24624, per=25.45%, avg=22932.00, stdev=2392.85, samples=2 00:10:13.975 iops : min= 5310, max= 6156, avg=5733.00, stdev=598.21, samples=2 00:10:13.975 lat (usec) : 750=0.01% 00:10:13.975 lat (msec) : 2=0.02%, 4=4.95%, 10=54.22%, 20=32.30%, 50=8.50% 00:10:13.975 cpu : usr=3.09%, sys=7.68%, ctx=449, majf=0, minf=2 00:10:13.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:13.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.975 issued rwts: total=5632,5861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.975 job1: (groupid=0, jobs=1): err= 0: pid=883336: Fri Oct 11 11:43:58 2024 00:10:13.975 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:13.975 slat (nsec): min=893, max=13179k, avg=95316.32, stdev=601219.18 00:10:13.975 clat (usec): min=3963, max=31804, avg=12688.95, stdev=4741.54 00:10:13.975 lat (usec): min=3969, max=31810, avg=12784.27, stdev=4785.22 00:10:13.975 clat percentiles (usec): 00:10:13.975 | 1.00th=[ 5473], 5.00th=[ 6718], 10.00th=[ 7898], 20.00th=[ 8586], 00:10:13.975 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11469], 60.00th=[12387], 00:10:13.975 | 70.00th=[14091], 80.00th=[17433], 90.00th=[19792], 95.00th=[21365], 00:10:13.975 | 99.00th=[24511], 99.50th=[27919], 99.90th=[31851], 99.95th=[31851], 00:10:13.975 | 99.99th=[31851] 00:10:13.975 write: IOPS=5151, 
BW=20.1MiB/s (21.1MB/s)(20.2MiB/1003msec); 0 zone resets 00:10:13.975 slat (nsec): min=1547, max=19044k, avg=93061.45, stdev=645204.07 00:10:13.975 clat (usec): min=551, max=43619, avg=11747.18, stdev=6177.00 00:10:13.975 lat (usec): min=2434, max=43634, avg=11840.24, stdev=6225.43 00:10:13.975 clat percentiles (usec): 00:10:13.975 | 1.00th=[ 3949], 5.00th=[ 5407], 10.00th=[ 7308], 20.00th=[ 8094], 00:10:13.975 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10421], 00:10:13.975 | 70.00th=[11863], 80.00th=[14484], 90.00th=[19268], 95.00th=[24773], 00:10:13.975 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:13.975 | 99.99th=[43779] 00:10:13.975 bw ( KiB/s): min=16384, max=24576, per=22.73%, avg=20480.00, stdev=5792.62, samples=2 00:10:13.975 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:10:13.975 lat (usec) : 750=0.01% 00:10:13.975 lat (msec) : 4=0.98%, 10=44.02%, 20=46.20%, 50=8.79% 00:10:13.975 cpu : usr=4.99%, sys=4.99%, ctx=321, majf=0, minf=2 00:10:13.975 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:13.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.975 issued rwts: total=5120,5167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.975 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.975 job2: (groupid=0, jobs=1): err= 0: pid=883345: Fri Oct 11 11:43:58 2024 00:10:13.975 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:10:13.975 slat (nsec): min=954, max=10398k, avg=72518.41, stdev=544402.85 00:10:13.975 clat (usec): min=2265, max=31892, avg=9365.24, stdev=4121.22 00:10:13.975 lat (usec): min=2285, max=31893, avg=9437.76, stdev=4161.01 00:10:13.975 clat percentiles (usec): 00:10:13.975 | 1.00th=[ 3785], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 6194], 00:10:13.975 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8586], 60.00th=[ 9241], 00:10:13.975 | 70.00th=[ 9896], 80.00th=[12518], 90.00th=[15139], 95.00th=[17171], 00:10:13.975 | 99.00th=[26346], 99.50th=[27395], 99.90th=[31851], 99.95th=[31851], 00:10:13.975 | 99.99th=[31851] 00:10:13.975 write: IOPS=6675, BW=26.1MiB/s (27.3MB/s)(26.2MiB/1004msec); 0 zone resets 00:10:13.975 slat (nsec): min=1615, max=9569.0k, avg=69126.29, stdev=441428.35 00:10:13.975 clat (usec): min=497, max=34889, avg=9673.23, stdev=5718.57 00:10:13.975 lat (usec): min=532, max=36145, avg=9742.35, stdev=5750.26 00:10:13.975 clat percentiles (usec): 00:10:13.976 | 1.00th=[ 2638], 5.00th=[ 3949], 10.00th=[ 4621], 20.00th=[ 6063], 00:10:13.976 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 8029], 60.00th=[ 9241], 00:10:13.976 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[17433], 95.00th=[22938], 00:10:13.976 | 99.00th=[31327], 99.50th=[33162], 99.90th=[34341], 99.95th=[34866], 00:10:13.976 | 99.99th=[34866] 00:10:13.976 bw ( KiB/s): min=26568, max=26680, per=29.55%, avg=26624.00, stdev=79.20, samples=2 00:10:13.976 iops : min= 6642, max= 6670, avg=6656.00, stdev=19.80, samples=2 00:10:13.976 lat (usec) : 500=0.01%, 750=0.02% 00:10:13.976 lat (msec) : 2=0.32%, 4=3.11%, 10=67.62%, 20=24.30%, 50=4.62% 00:10:13.976 cpu : usr=4.69%, sys=7.78%, ctx=497, majf=0, minf=1 00:10:13.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:13.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:10:13.976 issued rwts: total=6656,6702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.976 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.976 job3: (groupid=0, jobs=1): err= 0: pid=883346: Fri Oct 11 11:43:58 2024 00:10:13.976 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:13.976 slat (nsec): min=934, max=12438k, avg=115010.10, stdev=785989.34 00:10:13.976 clat (usec): min=3535, max=68803, avg=15918.78, stdev=7401.78 00:10:13.976 lat (usec): min=3551, max=74452, avg=16033.79, stdev=7467.58 00:10:13.976 clat percentiles (usec): 00:10:13.976 | 1.00th=[ 5211], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[ 9503], 00:10:13.976 | 30.00th=[11863], 40.00th=[13042], 50.00th=[14877], 60.00th=[16188], 00:10:13.976 | 70.00th=[17957], 80.00th=[21365], 90.00th=[23987], 95.00th=[26870], 00:10:13.976 | 99.00th=[39584], 99.50th=[39584], 99.90th=[68682], 99.95th=[68682], 00:10:13.976 | 99.99th=[68682] 00:10:13.976 write: IOPS=4872, BW=19.0MiB/s (20.0MB/s)(19.1MiB/1003msec); 0 zone resets 00:10:13.976 slat (nsec): min=1612, max=8081.5k, avg=84596.44, stdev=571874.06 00:10:13.976 clat (usec): min=853, max=28956, avg=11020.63, stdev=4308.87 00:10:13.976 lat (usec): min=863, max=28958, avg=11105.23, stdev=4357.84 00:10:13.976 clat percentiles (usec): 00:10:13.976 | 1.00th=[ 4490], 5.00th=[ 5342], 10.00th=[ 5735], 20.00th=[ 7046], 00:10:13.976 | 30.00th=[ 8160], 40.00th=[ 9503], 50.00th=[10552], 60.00th=[11469], 00:10:13.976 | 70.00th=[12780], 80.00th=[14746], 90.00th=[17171], 95.00th=[19006], 00:10:13.976 | 99.00th=[21627], 99.50th=[23200], 99.90th=[24773], 99.95th=[26346], 00:10:13.976 | 99.99th=[28967] 00:10:13.976 bw ( KiB/s): min=16384, max=21696, per=21.13%, avg=19040.00, stdev=3756.15, samples=2 00:10:13.976 iops : min= 4096, max= 5424, avg=4760.00, stdev=939.04, samples=2 00:10:13.976 lat (usec) : 1000=0.03% 00:10:13.976 lat (msec) : 2=0.20%, 4=0.48%, 10=32.70%, 20=53.09%, 50=13.26% 00:10:13.976 lat (msec) : 100=0.23% 00:10:13.976 cpu : usr=4.09%, sys=4.89%, ctx=306, majf=0, minf=1 00:10:13.976 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:13.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.976 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.976 issued rwts: total=4608,4887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.976 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.976 00:10:13.976 Run status group 0 (all jobs): 00:10:13.976 READ: bw=85.7MiB/s (89.8MB/s), 17.9MiB/s-25.9MiB/s (18.8MB/s-27.2MB/s), io=86.0MiB (90.2MB), run=1003-1004msec 00:10:13.976 WRITE: bw=88.0MiB/s (92.3MB/s), 19.0MiB/s-26.1MiB/s (20.0MB/s-27.3MB/s), io=88.3MiB (92.6MB), run=1003-1004msec 00:10:13.976 00:10:13.976 Disk stats (read/write): 00:10:13.976 nvme0n1: ios=4246/4608, merge=0/0, ticks=22593/28887, in_queue=51480, util=82.16% 00:10:13.976 nvme0n2: ios=3607/3838, merge=0/0, ticks=24376/20895, in_queue=45271, util=82.11% 00:10:13.976 nvme0n3: ios=5140/5135, merge=0/0, ticks=36487/34949, in_queue=71436, util=100.00% 00:10:13.976 nvme0n4: ios=3189/3584, merge=0/0, ticks=28767/22314, in_queue=51081, util=88.79% 00:10:13.976 11:43:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:13.976 [global] 00:10:13.976 thread=1 00:10:13.976 invalidate=1 00:10:13.976 rw=randwrite 00:10:13.976 time_based=1 00:10:13.976 runtime=1 00:10:13.976 
ioengine=libaio 00:10:13.976 direct=1 00:10:13.976 bs=4096 00:10:13.976 iodepth=128 00:10:13.976 norandommap=0 00:10:13.976 numjobs=1 00:10:13.976 00:10:13.976 verify_dump=1 00:10:13.976 verify_backlog=512 00:10:13.976 verify_state_save=0 00:10:13.976 do_verify=1 00:10:13.976 verify=crc32c-intel 00:10:13.976 [job0] 00:10:13.976 filename=/dev/nvme0n1 00:10:13.976 [job1] 00:10:13.976 filename=/dev/nvme0n2 00:10:13.976 [job2] 00:10:13.976 filename=/dev/nvme0n3 00:10:13.976 [job3] 00:10:13.976 filename=/dev/nvme0n4 00:10:13.976 Could not set queue depth (nvme0n1) 00:10:13.976 Could not set queue depth (nvme0n2) 00:10:13.976 Could not set queue depth (nvme0n3) 00:10:13.976 Could not set queue depth (nvme0n4) 00:10:14.241 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.241 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.241 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.241 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.241 fio-3.35 00:10:14.241 Starting 4 threads 00:10:15.655 00:10:15.655 job0: (groupid=0, jobs=1): err= 0: pid=883865: Fri Oct 11 11:43:59 2024 00:10:15.655 read: IOPS=5919, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1005msec) 00:10:15.655 slat (nsec): min=878, max=14603k, avg=81209.71, stdev=582455.44 00:10:15.655 clat (usec): min=2604, max=34953, avg=10210.40, stdev=4162.45 00:10:15.655 lat (usec): min=4182, max=34960, avg=10291.61, stdev=4201.65 00:10:15.655 clat percentiles (usec): 00:10:15.655 | 1.00th=[ 5211], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7308], 00:10:15.655 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:10:15.655 | 70.00th=[10552], 80.00th=[11731], 90.00th=[13960], 95.00th=[18482], 00:10:15.655 | 99.00th=[29230], 99.50th=[30016], 99.90th=[33162], 99.95th=[34866], 00:10:15.655 | 99.99th=[34866] 00:10:15.655 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:10:15.655 slat (nsec): min=1475, max=10588k, avg=80028.55, stdev=476205.92 00:10:15.655 clat (usec): min=1218, max=34945, avg=10857.15, stdev=5875.06 00:10:15.655 lat (usec): min=1229, max=34954, avg=10937.18, stdev=5912.86 00:10:15.655 clat percentiles (usec): 00:10:15.655 | 1.00th=[ 2835], 5.00th=[ 4490], 10.00th=[ 5866], 20.00th=[ 7046], 00:10:15.655 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9634], 00:10:15.655 | 70.00th=[10814], 80.00th=[14222], 90.00th=[19268], 95.00th=[25560], 00:10:15.655 | 99.00th=[28705], 99.50th=[28967], 99.90th=[31065], 99.95th=[32900], 00:10:15.655 | 99.99th=[34866] 00:10:15.655 bw ( KiB/s): min=23696, max=25456, per=25.00%, avg=24576.00, stdev=1244.51, samples=2 00:10:15.655 iops : min= 5924, max= 6364, avg=6144.00, stdev=311.13, samples=2 00:10:15.655 lat (msec) : 2=0.12%, 4=0.95%, 10=62.14%, 20=29.92%, 50=6.88% 00:10:15.655 cpu : usr=4.08%, sys=4.08%, ctx=497, majf=0, minf=2 00:10:15.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:15.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.655 issued rwts: total=5949,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.655 job1: (groupid=0, jobs=1): err= 0: pid=883866: 
Fri Oct 11 11:43:59 2024 00:10:15.655 read: IOPS=6162, BW=24.1MiB/s (25.2MB/s)(24.2MiB/1004msec) 00:10:15.655 slat (nsec): min=883, max=14884k, avg=74341.92, stdev=479040.28 00:10:15.655 clat (usec): min=3099, max=31103, avg=9424.04, stdev=3598.18 00:10:15.655 lat (usec): min=4297, max=31125, avg=9498.38, stdev=3632.75 00:10:15.655 clat percentiles (usec): 00:10:15.655 | 1.00th=[ 5735], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 7570], 00:10:15.655 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8455], 00:10:15.655 | 70.00th=[ 8979], 80.00th=[10421], 90.00th=[13042], 95.00th=[15926], 00:10:15.655 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:10:15.655 | 99.99th=[31065] 00:10:15.655 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:10:15.655 slat (nsec): min=1480, max=16296k, avg=76513.48, stdev=519954.65 00:10:15.655 clat (usec): min=4004, max=46708, avg=10329.49, stdev=4835.09 00:10:15.656 lat (usec): min=4013, max=46738, avg=10406.00, stdev=4879.42 00:10:15.656 clat percentiles (usec): 00:10:15.656 | 1.00th=[ 4293], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7242], 00:10:15.656 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:10:15.656 | 70.00th=[11338], 80.00th=[12780], 90.00th=[15795], 95.00th=[19530], 00:10:15.656 | 99.00th=[30540], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:10:15.656 | 99.99th=[46924] 00:10:15.656 bw ( KiB/s): min=20752, max=31824, per=26.74%, avg=26288.00, stdev=7829.09, samples=2 00:10:15.656 iops : min= 5188, max= 7956, avg=6572.00, stdev=1957.27, samples=2 00:10:15.656 lat (msec) : 4=0.01%, 10=71.13%, 20=25.75%, 50=3.11% 00:10:15.656 cpu : usr=3.39%, sys=5.88%, ctx=492, majf=0, minf=1 00:10:15.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.656 issued rwts: total=6187,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.656 job2: (groupid=0, jobs=1): err= 0: pid=883867: Fri Oct 11 11:43:59 2024 00:10:15.656 read: IOPS=4746, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1004msec) 00:10:15.656 slat (nsec): min=949, max=22312k, avg=103448.79, stdev=788960.59 00:10:15.656 clat (usec): min=1279, max=47591, avg=12955.35, stdev=7352.71 00:10:15.656 lat (usec): min=4597, max=50165, avg=13058.80, stdev=7434.26 00:10:15.656 clat percentiles (usec): 00:10:15.656 | 1.00th=[ 6194], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[ 8717], 00:10:15.656 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[10814], 00:10:15.656 | 70.00th=[11863], 80.00th=[15270], 90.00th=[21890], 95.00th=[31589], 00:10:15.656 | 99.00th=[41157], 99.50th=[43779], 99.90th=[47449], 99.95th=[47449], 00:10:15.656 | 99.99th=[47449] 00:10:15.656 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:15.656 slat (nsec): min=1646, max=12763k, avg=93662.49, stdev=556395.33 00:10:15.656 clat (usec): min=790, max=68562, avg=12698.33, stdev=9908.43 00:10:15.656 lat (usec): min=818, max=68576, avg=12791.99, stdev=9975.90 00:10:15.656 clat percentiles (usec): 00:10:15.656 | 1.00th=[ 5669], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[ 8979], 00:10:15.656 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:10:15.656 | 70.00th=[10814], 80.00th=[12911], 90.00th=[18220], 95.00th=[31589], 00:10:15.656 | 
99.00th=[64226], 99.50th=[66847], 99.90th=[68682], 99.95th=[68682], 00:10:15.656 | 99.99th=[68682] 00:10:15.656 bw ( KiB/s): min=14360, max=26600, per=20.83%, avg=20480.00, stdev=8654.99, samples=2 00:10:15.656 iops : min= 3590, max= 6650, avg=5120.00, stdev=2163.75, samples=2 00:10:15.656 lat (usec) : 1000=0.02% 00:10:15.656 lat (msec) : 2=0.01%, 10=51.54%, 20=37.49%, 50=9.59%, 100=1.35% 00:10:15.656 cpu : usr=3.59%, sys=5.08%, ctx=396, majf=0, minf=1 00:10:15.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.656 issued rwts: total=4765,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.656 job3: (groupid=0, jobs=1): err= 0: pid=883868: Fri Oct 11 11:43:59 2024 00:10:15.656 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:10:15.656 slat (nsec): min=924, max=10854k, avg=71375.23, stdev=500453.75 00:10:15.656 clat (usec): min=3683, max=23987, avg=9578.25, stdev=2263.12 00:10:15.656 lat (usec): min=3716, max=23997, avg=9649.63, stdev=2304.09 00:10:15.656 clat percentiles (usec): 00:10:15.656 | 1.00th=[ 4883], 5.00th=[ 6849], 10.00th=[ 7701], 20.00th=[ 8160], 00:10:15.656 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9634], 00:10:15.656 | 70.00th=[10028], 80.00th=[11338], 90.00th=[12387], 95.00th=[12911], 00:10:15.656 | 99.00th=[17695], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:10:15.656 | 99.99th=[23987] 00:10:15.656 write: IOPS=6755, BW=26.4MiB/s (27.7MB/s)(26.5MiB/1004msec); 0 zone resets 00:10:15.656 slat (nsec): min=1526, max=11291k, avg=69014.46, stdev=474043.30 00:10:15.656 clat (usec): min=1173, max=25009, avg=9352.34, stdev=3153.30 00:10:15.656 lat (usec): min=1180, max=25012, avg=9421.35, stdev=3186.52 00:10:15.656 clat percentiles (usec): 00:10:15.656 | 1.00th=[ 1532], 5.00th=[ 4424], 10.00th=[ 5669], 20.00th=[ 7767], 00:10:15.656 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 9110], 00:10:15.656 | 70.00th=[ 9896], 80.00th=[11469], 90.00th=[13304], 95.00th=[15664], 00:10:15.656 | 99.00th=[18482], 99.50th=[20317], 99.90th=[22152], 99.95th=[22676], 00:10:15.656 | 99.99th=[25035] 00:10:15.656 bw ( KiB/s): min=24696, max=28560, per=27.08%, avg=26628.00, stdev=2732.26, samples=2 00:10:15.656 iops : min= 6174, max= 7140, avg=6657.00, stdev=683.07, samples=2 00:10:15.656 lat (msec) : 2=0.60%, 4=1.40%, 10=68.02%, 20=29.36%, 50=0.63% 00:10:15.656 cpu : usr=5.08%, sys=7.18%, ctx=435, majf=0, minf=1 00:10:15.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.656 issued rwts: total=6656,6783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.656 00:10:15.656 Run status group 0 (all jobs): 00:10:15.656 READ: bw=91.6MiB/s (96.0MB/s), 18.5MiB/s-25.9MiB/s (19.4MB/s-27.2MB/s), io=92.0MiB (96.5MB), run=1004-1005msec 00:10:15.656 WRITE: bw=96.0MiB/s (101MB/s), 19.9MiB/s-26.4MiB/s (20.9MB/s-27.7MB/s), io=96.5MiB (101MB), run=1004-1005msec 00:10:15.656 00:10:15.656 Disk stats (read/write): 00:10:15.656 nvme0n1: ios=4726/5120, merge=0/0, ticks=32016/38862, in_queue=70878, util=87.78% 00:10:15.656 nvme0n2: 
ios=5158/5335, merge=0/0, ticks=18346/19477, in_queue=37823, util=96.43% 00:10:15.656 nvme0n3: ios=4341/4608, merge=0/0, ticks=25222/25535, in_queue=50757, util=97.05% 00:10:15.656 nvme0n4: ios=5355/5632, merge=0/0, ticks=28830/27939, in_queue=56769, util=92.00% 00:10:15.656 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:15.656 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=884052 00:10:15.656 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:15.656 11:43:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:15.656 [global] 00:10:15.656 thread=1 00:10:15.656 invalidate=1 00:10:15.656 rw=read 00:10:15.656 time_based=1 00:10:15.656 runtime=10 00:10:15.656 ioengine=libaio 00:10:15.656 direct=1 00:10:15.656 bs=4096 00:10:15.656 iodepth=1 00:10:15.656 norandommap=1 00:10:15.656 numjobs=1 00:10:15.656 00:10:15.656 [job0] 00:10:15.656 filename=/dev/nvme0n1 00:10:15.656 [job1] 00:10:15.656 filename=/dev/nvme0n2 00:10:15.656 [job2] 00:10:15.656 filename=/dev/nvme0n3 00:10:15.656 [job3] 00:10:15.656 filename=/dev/nvme0n4 00:10:15.656 Could not set queue depth (nvme0n1) 00:10:15.656 Could not set queue depth (nvme0n2) 00:10:15.656 Could not set queue depth (nvme0n3) 00:10:15.656 Could not set queue depth (nvme0n4) 00:10:15.917 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.917 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.917 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.917 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.917 fio-3.35 00:10:15.917 Starting 4 threads 00:10:18.455 11:44:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:18.715 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:18.715 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13107200, buflen=4096 00:10:18.715 fio: pid=884396, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.715 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11685888, buflen=4096 00:10:18.715 fio: pid=884395, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.715 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.715 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:18.976 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.976 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:18.976 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=1798144, buflen=4096 
00:10:18.976 fio: pid=884392, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:19.237 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=2093056, buflen=4096 00:10:19.237 fio: pid=884393, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:19.237 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.237 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:19.237 00:10:19.237 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=884392: Fri Oct 11 11:44:03 2024 00:10:19.237 read: IOPS=148, BW=591KiB/s (605kB/s)(1756KiB/2971msec) 00:10:19.237 slat (usec): min=5, max=13450, avg=60.00, stdev=645.22 00:10:19.237 clat (usec): min=635, max=44349, avg=6699.98, stdev=14121.59 00:10:19.237 lat (usec): min=677, max=46104, avg=6760.05, stdev=14134.35 00:10:19.237 clat percentiles (usec): 00:10:19.237 | 1.00th=[ 791], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1020], 00:10:19.237 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:10:19.237 | 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[41681], 95.00th=[42206], 00:10:19.237 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:10:19.237 | 99.99th=[44303] 00:10:19.237 bw ( KiB/s): min= 272, max= 920, per=7.01%, avg=622.40, stdev=236.60, samples=5 00:10:19.237 iops : min= 68, max= 230, avg=155.60, stdev=59.15, samples=5 00:10:19.237 lat (usec) : 750=0.91%, 1000=13.41% 00:10:19.237 lat (msec) : 2=71.82%, 50=13.64% 00:10:19.237 cpu : usr=0.27%, sys=0.34%, ctx=442, majf=0, minf=1 00:10:19.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.237 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.237 issued rwts: total=440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.237 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=884393: Fri Oct 11 11:44:03 2024 00:10:19.237 read: IOPS=162, BW=647KiB/s (663kB/s)(2044KiB/3158msec) 00:10:19.237 slat (usec): min=6, max=13379, avg=109.35, stdev=969.27 00:10:19.237 clat (usec): min=295, max=43002, avg=6062.34, stdev=13732.07 00:10:19.237 lat (usec): min=320, max=43027, avg=6171.85, stdev=13735.33 00:10:19.237 clat percentiles (usec): 00:10:19.237 | 1.00th=[ 441], 5.00th=[ 537], 10.00th=[ 586], 20.00th=[ 668], 00:10:19.237 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 840], 60.00th=[ 955], 00:10:19.237 | 70.00th=[ 1057], 80.00th=[ 1123], 90.00th=[41681], 95.00th=[42206], 00:10:19.237 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:10:19.237 | 99.99th=[43254] 00:10:19.237 bw ( KiB/s): min= 88, max= 2096, per=6.91%, avg=613.33, stdev=850.08, samples=6 00:10:19.237 iops : min= 22, max= 524, avg=153.33, stdev=212.52, samples=6 00:10:19.237 lat (usec) : 500=2.34%, 750=39.84%, 1000=22.07% 00:10:19.237 lat (msec) : 2=22.66%, 4=0.20%, 50=12.70% 00:10:19.237 cpu : usr=0.19%, sys=0.48%, ctx=517, majf=0, minf=2 00:10:19.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:19.237 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.237 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.237 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=884395: Fri Oct 11 11:44:03 2024 00:10:19.237 read: IOPS=1038, BW=4151KiB/s (4251kB/s)(11.1MiB/2749msec) 00:10:19.237 slat (usec): min=6, max=18862, avg=37.54, stdev=398.48 00:10:19.237 clat (usec): min=190, max=1250, avg=919.24, stdev=94.83 00:10:19.237 lat (usec): min=197, max=19852, avg=956.78, stdev=411.23 00:10:19.237 clat percentiles (usec): 00:10:19.237 | 1.00th=[ 635], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 857], 00:10:19.237 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 938], 60.00th=[ 947], 00:10:19.237 | 70.00th=[ 963], 80.00th=[ 979], 90.00th=[ 1020], 95.00th=[ 1057], 00:10:19.237 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1237], 99.95th=[ 1237], 00:10:19.237 | 99.99th=[ 1254] 00:10:19.237 bw ( KiB/s): min= 4144, max= 4304, per=47.36%, avg=4201.60, stdev=64.35, samples=5 00:10:19.237 iops : min= 1036, max= 1076, avg=1050.40, stdev=16.09, samples=5 00:10:19.237 lat (usec) : 250=0.04%, 500=0.11%, 750=5.99%, 1000=80.38% 00:10:19.237 lat (msec) : 2=13.45% 00:10:19.237 cpu : usr=2.15%, sys=3.97%, ctx=2856, majf=0, minf=2 00:10:19.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.238 issued rwts: total=2854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.238 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=884396: Fri Oct 11 11:44:03 2024 00:10:19.238 read: IOPS=1232, BW=4929KiB/s (5047kB/s)(12.5MiB/2597msec) 00:10:19.238 slat (nsec): min=6914, max=59698, avg=23671.10, stdev=7635.98 00:10:19.238 clat (usec): min=352, max=42516, avg=781.93, stdev=1270.47 00:10:19.238 lat (usec): min=371, max=42543, avg=805.60, stdev=1270.66 00:10:19.238 clat percentiles (usec): 00:10:19.238 | 1.00th=[ 494], 5.00th=[ 611], 10.00th=[ 652], 20.00th=[ 685], 00:10:19.238 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 758], 60.00th=[ 775], 00:10:19.238 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 816], 95.00th=[ 832], 00:10:19.238 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 1237], 99.95th=[42206], 00:10:19.238 | 99.99th=[42730] 00:10:19.238 bw ( KiB/s): min= 5016, max= 5184, per=57.67%, avg=5115.20, stdev=62.12, samples=5 00:10:19.238 iops : min= 1254, max= 1296, avg=1278.80, stdev=15.53, samples=5 00:10:19.238 lat (usec) : 500=1.12%, 750=41.42%, 1000=57.29% 00:10:19.238 lat (msec) : 2=0.03%, 50=0.09% 00:10:19.238 cpu : usr=1.43%, sys=3.20%, ctx=3201, majf=0, minf=2 00:10:19.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.238 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.238 issued rwts: total=3201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.238 00:10:19.238 Run status group 0 (all jobs): 00:10:19.238 READ: bw=8870KiB/s (9083kB/s), 591KiB/s-4929KiB/s (605kB/s-5047kB/s), io=27.4MiB (28.7MB), 
run=2597-3158msec 00:10:19.238 00:10:19.238 Disk stats (read/write): 00:10:19.238 nvme0n1: ios=419/0, merge=0/0, ticks=2793/0, in_queue=2793, util=94.26% 00:10:19.238 nvme0n2: ios=485/0, merge=0/0, ticks=3026/0, in_queue=3026, util=94.39% 00:10:19.238 nvme0n3: ios=2714/0, merge=0/0, ticks=2190/0, in_queue=2190, util=95.99% 00:10:19.238 nvme0n4: ios=3199/0, merge=0/0, ticks=2395/0, in_queue=2395, util=96.39% 00:10:19.499 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.499 11:44:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:19.499 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.499 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:19.759 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.759 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:20.020 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.020 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:20.020 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:20.020 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 884052 00:10:20.020 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:20.020 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:20.281 nvmf hotplug test: fio failed as expected 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.281 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.541 rmmod nvme_tcp 00:10:20.541 rmmod nvme_fabrics 00:10:20.541 rmmod nvme_keyring 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 880370 ']' 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 880370 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 880370 ']' 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 880370 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.541 11:44:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 880370 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 880370' 00:10:20.541 killing process with pid 880370 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 880370 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 880370 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.541 11:44:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:23.095 00:10:23.095 real 0m29.240s 00:10:23.095 user 2m38.867s 00:10:23.095 sys 0m9.556s 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.095 ************************************ 00:10:23.095 END TEST nvmf_fio_target 00:10:23.095 ************************************ 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.095 ************************************ 00:10:23.095 START TEST nvmf_bdevio 00:10:23.095 ************************************ 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:23.095 * Looking for test storage... 
00:10:23.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:23.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.095 --rc genhtml_branch_coverage=1 00:10:23.095 --rc genhtml_function_coverage=1 00:10:23.095 --rc genhtml_legend=1 00:10:23.095 --rc geninfo_all_blocks=1 00:10:23.095 --rc geninfo_unexecuted_blocks=1 00:10:23.095 00:10:23.095 ' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:23.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.095 --rc genhtml_branch_coverage=1 00:10:23.095 --rc genhtml_function_coverage=1 00:10:23.095 --rc genhtml_legend=1 00:10:23.095 --rc geninfo_all_blocks=1 00:10:23.095 --rc geninfo_unexecuted_blocks=1 00:10:23.095 00:10:23.095 ' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:23.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.095 --rc genhtml_branch_coverage=1 00:10:23.095 --rc genhtml_function_coverage=1 00:10:23.095 --rc genhtml_legend=1 00:10:23.095 --rc geninfo_all_blocks=1 00:10:23.095 --rc geninfo_unexecuted_blocks=1 00:10:23.095 00:10:23.095 ' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:23.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.095 --rc genhtml_branch_coverage=1 00:10:23.095 --rc genhtml_function_coverage=1 00:10:23.095 --rc genhtml_legend=1 00:10:23.095 --rc geninfo_all_blocks=1 00:10:23.095 --rc geninfo_unexecuted_blocks=1 00:10:23.095 00:10:23.095 ' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.095 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.096 11:44:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:31.238 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:31.238 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.238 11:44:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:31.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:31.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.238 
11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.238 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:10:31.238 00:10:31.238 --- 10.0.0.2 ping statistics --- 00:10:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.238 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:10:31.239 00:10:31.239 --- 10.0.0.1 ping statistics --- 00:10:31.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.239 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:31.239 11:44:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=889434 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 889434 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 889434 ']' 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.239 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.239 [2024-10-11 11:44:15.073477] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
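The block above is nvmf_tcp_init from nvmf/common.sh building the two-port TCP test bed: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction proves the path before the target app starts. Condensed from the trace into plain commands (values exactly as logged; the long iptables comment is the tag the ipts wrapper appends so teardown can find its own rules later):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target ns (0.702 ms above)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns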
00:10:31.239 [2024-10-11 11:44:15.073539] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.239 [2024-10-11 11:44:15.162951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.239 [2024-10-11 11:44:15.215873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.239 [2024-10-11 11:44:15.215925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.239 [2024-10-11 11:44:15.215935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.239 [2024-10-11 11:44:15.215942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.239 [2024-10-11 11:44:15.215948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.239 [2024-10-11 11:44:15.218325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:31.239 [2024-10-11 11:44:15.218485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:31.239 [2024-10-11 11:44:15.218645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:31.239 [2024-10-11 11:44:15.218645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.499 [2024-10-11 11:44:15.954499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.499 Malloc0 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.499 11:44:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.499 11:44:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.499 [2024-10-11 11:44:16.031598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:31.499 { 00:10:31.499 "params": { 00:10:31.499 "name": "Nvme$subsystem", 00:10:31.499 "trtype": "$TEST_TRANSPORT", 00:10:31.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:31.499 "adrfam": "ipv4", 00:10:31.499 "trsvcid": "$NVMF_PORT", 00:10:31.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:31.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:31.499 "hdgst": ${hdgst:-false}, 00:10:31.499 "ddgst": ${ddgst:-false} 00:10:31.499 }, 00:10:31.499 "method": "bdev_nvme_attach_controller" 00:10:31.499 } 00:10:31.499 EOF 00:10:31.499 )") 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:31.499 11:44:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:31.499 "params": { 00:10:31.499 "name": "Nvme1", 00:10:31.499 "trtype": "tcp", 00:10:31.499 "traddr": "10.0.0.2", 00:10:31.499 "adrfam": "ipv4", 00:10:31.499 "trsvcid": "4420", 00:10:31.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:31.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:31.499 "hdgst": false, 00:10:31.499 "ddgst": false 00:10:31.499 }, 00:10:31.499 "method": "bdev_nvme_attach_controller" 00:10:31.499 }' 00:10:31.499 [2024-10-11 11:44:16.097517] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
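The four rpc_cmd calls traced above are the whole target-side setup for this test: create the TCP transport, back it with a 64 MiB malloc bdev, expose that as a namespace of cnode1, and listen on the namespaced address. rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so a hand-run sketch against the same target would be roughly (flags exactly as traced):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                                # -a: allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420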
00:10:31.499 [2024-10-11 11:44:16.097590] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889787 ] 00:10:31.760 [2024-10-11 11:44:16.181528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:31.760 [2024-10-11 11:44:16.238343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.760 [2024-10-11 11:44:16.238508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.760 [2024-10-11 11:44:16.238508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.020 I/O targets: 00:10:32.020 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:32.020 00:10:32.020 00:10:32.020 CUnit - A unit testing framework for C - Version 2.1-3 00:10:32.020 http://cunit.sourceforge.net/ 00:10:32.020 00:10:32.020 00:10:32.020 Suite: bdevio tests on: Nvme1n1 00:10:32.020 Test: blockdev write read block ...passed 00:10:32.281 Test: blockdev write zeroes read block ...passed 00:10:32.281 Test: blockdev write zeroes read no split ...passed 00:10:32.281 Test: blockdev write zeroes read split ...passed 00:10:32.281 Test: blockdev write zeroes read split partial ...passed 00:10:32.281 Test: blockdev reset ...[2024-10-11 11:44:16.738608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:32.281 [2024-10-11 11:44:16.738723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251c510 (9): Bad file descriptor 00:10:32.281 [2024-10-11 11:44:16.752139] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
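The JSON printed by gen_nvmf_target_json just before the run is how bdevio gets its device: instead of opening local hardware, the bdev layer attaches an NVMe-oF controller across the loopback fabric via --json /dev/fd/62. A standalone sketch using a regular file instead of the fd trick (the params object is verbatim from the trace; the outer subsystems/config wrapper is reconstructed from memory of the helper and may differ in detail):

  cat > /tmp/bdevio.json <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode1",
          "hostnqn": "nqn.2016-06.io.spdk:host1",
          "hdgst": false, "ddgst": false
        }
      }]
    }]
  }
  EOF
  test/bdev/bdevio/bdevio --json /tmp/bdevio.json

Note that the COMPARE FAILURE and ABORTED - FAILED FUSED notices further down belong to tests that still report passed: the fused compare-and-write cases are evidently exercising the error path, since the suite finishes 23/23 passed.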
00:10:32.281 passed 00:10:32.281 Test: blockdev write read 8 blocks ...passed 00:10:32.281 Test: blockdev write read size > 128k ...passed 00:10:32.281 Test: blockdev write read invalid size ...passed 00:10:32.281 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:32.281 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:32.281 Test: blockdev write read max offset ...passed 00:10:32.281 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:32.541 Test: blockdev writev readv 8 blocks ...passed 00:10:32.541 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.541 Test: blockdev writev readv block ...passed 00:10:32.541 Test: blockdev writev readv size > 128k ...passed 00:10:32.541 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.541 Test: blockdev comparev and writev ...[2024-10-11 11:44:17.019345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.541 [2024-10-11 11:44:17.019394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:32.541 [2024-10-11 11:44:17.019411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.541 [2024-10-11 11:44:17.019420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:32.541 [2024-10-11 11:44:17.020002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.541 [2024-10-11 11:44:17.020016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:32.541 [2024-10-11 11:44:17.020030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.542 [2024-10-11 11:44:17.020038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:32.542 [2024-10-11 11:44:17.020610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.542 [2024-10-11 11:44:17.020623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:32.542 [2024-10-11 11:44:17.020645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.542 [2024-10-11 11:44:17.020654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:32.542 [2024-10-11 11:44:17.021235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.542 [2024-10-11 11:44:17.021247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:32.542 [2024-10-11 11:44:17.021261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.542 [2024-10-11 11:44:17.021269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:32.542 passed 00:10:32.542 Test: blockdev nvme passthru rw ...passed 00:10:32.542 Test: blockdev nvme passthru vendor specific ...[2024-10-11 11:44:17.105499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:32.542 [2024-10-11 11:44:17.105516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:32.542 [2024-10-11 11:44:17.105873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:32.542 [2024-10-11 11:44:17.105884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:32.542 [2024-10-11 11:44:17.106258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:32.542 [2024-10-11 11:44:17.106268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:32.542 [2024-10-11 11:44:17.106641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:32.542 [2024-10-11 11:44:17.106652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:32.542 passed 00:10:32.542 Test: blockdev nvme admin passthru ...passed 00:10:32.542 Test: blockdev copy ...passed 00:10:32.542 00:10:32.542 Run Summary: Type Total Ran Passed Failed Inactive 00:10:32.542 suites 1 1 n/a 0 0 00:10:32.542 tests 23 23 23 0 0 00:10:32.542 asserts 152 152 152 0 n/a 00:10:32.542 00:10:32.542 Elapsed time = 1.204 seconds 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.802 rmmod nvme_tcp 00:10:32.802 rmmod nvme_fabrics 00:10:32.802 rmmod nvme_keyring 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
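From here nvmftestfini unwinds the fixture: kill the target process, unload the kernel initiator modules (the set +e retry loop exists because nvme-tcp can stay busy for a moment after disconnect; the rmmod lines are its output), strip only the SPDK-tagged firewall rules, and tear down the namespace. Roughly, with the namespace removal written out as an assumption for what _remove_spdk_ns does:

  kill "$nvmfpid" && wait "$nvmfpid"         # pid 889434 in this run
  modprobe -v -r nvme-tcp                    # retried until it unloads cleanly
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk            # assumption: the exact _remove_spdk_ns command
  ip -4 addr flush cvl_0_1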
00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 889434 ']' 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 889434 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 889434 ']' 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 889434 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 889434 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 889434' 00:10:32.802 killing process with pid 889434 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 889434 00:10:32.802 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 889434 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.063 11:44:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:35.608 00:10:35.608 real 0m12.308s 00:10:35.608 user 0m13.967s 00:10:35.608 sys 0m6.227s 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 ************************************ 00:10:35.608 END TEST nvmf_bdevio 00:10:35.608 ************************************ 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:35.608 00:10:35.608 real 5m4.462s 00:10:35.608 user 11m57.969s 00:10:35.608 sys 1m52.080s 00:10:35.608 
11:44:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 ************************************ 00:10:35.608 END TEST nvmf_target_core 00:10:35.608 ************************************ 00:10:35.608 11:44:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:35.608 11:44:19 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.608 11:44:19 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.608 11:44:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.608 ************************************ 00:10:35.608 START TEST nvmf_target_extra 00:10:35.608 ************************************ 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:35.608 * Looking for test storage... 00:10:35.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.608 --rc genhtml_branch_coverage=1 00:10:35.608 --rc genhtml_function_coverage=1 00:10:35.608 --rc genhtml_legend=1 00:10:35.608 --rc geninfo_all_blocks=1 00:10:35.608 --rc geninfo_unexecuted_blocks=1 00:10:35.608 00:10:35.608 ' 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.608 --rc genhtml_branch_coverage=1 00:10:35.608 --rc genhtml_function_coverage=1 00:10:35.608 --rc genhtml_legend=1 00:10:35.608 --rc geninfo_all_blocks=1 00:10:35.608 --rc geninfo_unexecuted_blocks=1 00:10:35.608 00:10:35.608 ' 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.608 --rc genhtml_branch_coverage=1 00:10:35.608 --rc genhtml_function_coverage=1 00:10:35.608 --rc genhtml_legend=1 00:10:35.608 --rc geninfo_all_blocks=1 00:10:35.608 --rc geninfo_unexecuted_blocks=1 00:10:35.608 00:10:35.608 ' 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.608 --rc genhtml_branch_coverage=1 00:10:35.608 --rc genhtml_function_coverage=1 00:10:35.608 --rc genhtml_legend=1 00:10:35.608 --rc geninfo_all_blocks=1 00:10:35.608 --rc geninfo_unexecuted_blocks=1 00:10:35.608 00:10:35.608 ' 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.608 11:44:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.609 11:44:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:35.609 ************************************ 00:10:35.609 START TEST nvmf_example 00:10:35.609 ************************************ 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:35.609 * Looking for test storage... 
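One real wart that recurs each time common.sh is sourced: its line 33 runs '[' '' -eq 1 ']' because the flag it tests is empty in this environment, and test cannot integer-compare an empty string, hence the repeated "[: : integer expression expected" noise (harmless; the branch simply falls through). A quiet equivalent, with the variable and argument names hypothetical since the trace does not show which flag line 33 reads:

  # failing pattern as traced:  [ "$SOME_FLAG" -eq 1 ]  ->  [ '' -eq 1 ]  when unset
  [[ "${SOME_FLAG:-0}" -eq 1 ]] && NVMF_APP+=(--some-arg)   # hypothetical flag and arg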
00:10:35.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.609 --rc genhtml_branch_coverage=1 00:10:35.609 --rc genhtml_function_coverage=1 00:10:35.609 --rc genhtml_legend=1 00:10:35.609 --rc geninfo_all_blocks=1 00:10:35.609 --rc geninfo_unexecuted_blocks=1 00:10:35.609 00:10:35.609 ' 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.609 --rc genhtml_branch_coverage=1 00:10:35.609 --rc genhtml_function_coverage=1 00:10:35.609 --rc genhtml_legend=1 00:10:35.609 --rc geninfo_all_blocks=1 00:10:35.609 --rc geninfo_unexecuted_blocks=1 00:10:35.609 00:10:35.609 ' 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.609 --rc genhtml_branch_coverage=1 00:10:35.609 --rc genhtml_function_coverage=1 00:10:35.609 --rc genhtml_legend=1 00:10:35.609 --rc geninfo_all_blocks=1 00:10:35.609 --rc geninfo_unexecuted_blocks=1 00:10:35.609 00:10:35.609 ' 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.609 --rc genhtml_branch_coverage=1 00:10:35.609 --rc genhtml_function_coverage=1 00:10:35.609 --rc genhtml_legend=1 00:10:35.609 --rc geninfo_all_blocks=1 00:10:35.609 --rc geninfo_unexecuted_blocks=1 00:10:35.609 00:10:35.609 ' 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:35.609 11:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.609 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:35.871 11:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:35.871 11:44:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.013 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.013 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.013 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.013 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:44.014 11:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:44.014 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:44.014 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:44.014 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:44.014 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.014 11:44:27 
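For reference, the device-discovery loop traced above reduces to a few lines of sysfs walking. A minimal stand-alone sketch, assuming the same layout the trace shows (Intel vendor 0x8086, E810 device 0x159b, netdev names exposed under each PCI function's net/ directory):

  shopt -s nullglob                      # skip functions with no net/ entries
  intel=0x8086
  for pci in /sys/bus/pci/devices/*; do
      # match vendor/device the way the traced pci_bus_cache lookup does
      [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done

Run on this host it would print the same two cvl_0_0/cvl_0_1 lines the trace echoes.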
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:44.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:44.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:10:44.014 00:10:44.014 --- 10.0.0.2 ping statistics --- 00:10:44.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.014 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:10:44.014 00:10:44.014 --- 10.0.0.1 ping statistics --- 00:10:44.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.014 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:10:44.014 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=894332 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 894332 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 894332 ']' 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example 
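The nvmf_tcp_init block above builds the entire point-to-point topology for the test: the target-side port moves into its own network namespace, each side gets one half of 10.0.0.0/24, the NVMe/TCP port is opened with an SPDK-tagged firewall rule, and connectivity is verified in both directions. Condensed into a runnable sketch (root required; same interface, namespace, and address names as the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # tag the rule so teardown can strip exactly what setup added
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator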
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.015 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.015 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.015 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:44.015 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:44.015 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.015 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.275 11:44:28 
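The five rpc_cmd calls traced here are the complete target configuration. rpc_cmd is a test-harness wrapper; the RPC names map one-to-one onto SPDK's scripts/rpc.py, so the same target can be configured by hand against a running nvmf_tgt (sketch; assumes the default /var/tmp/spdk.sock RPC socket):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
  scripts/rpc.py bdev_malloc_create 64 512                   # 64 MiB RAM bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420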
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:44.275 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:54.269 Initializing NVMe Controllers
00:10:54.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:54.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:54.269 Initialization complete. Launching workers.
00:10:54.269 ========================================================
00:10:54.269                                                                           Latency(us)
00:10:54.269 Device Information                                                     :     IOPS   MiB/s  Average      min       max
00:10:54.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18620.50   72.74  3437.46   626.17  15477.55
00:10:54.269 ========================================================
00:10:54.269 Total                                                                  : 18620.50   72.74  3437.46   626.17  15477.55
00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:54.529 rmmod nvme_tcp
00:10:54.529 rmmod nvme_fabrics
00:10:54.529 rmmod nvme_keyring
00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 894332 ']' 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 894332 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 894332 ']' 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 894332 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.529 11:44:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 894332 00:10:54.529 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- #
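Two quick consistency checks on the perf table above: with -o 4096 each I/O is 4 KiB, so MiB/s must equal IOPS/256, and with -q 64 Little's law (average latency ~ queue depth / IOPS) predicts the Average column:

  awk 'BEGIN { printf "%.2f MiB/s\n", 18620.50 * 4096 / 1048576 }'   # 72.74, matches the MiB/s column
  awk 'BEGIN { printf "%.0f us\n",    64 / 18620.50 * 1e6 }'         # ~3437 us, agrees with Average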
process_name=nvmf 00:10:54.529 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:54.529 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 894332' 00:10:54.529 killing process with pid 894332 00:10:54.529 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 894332 00:10:54.529 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 894332 00:10:54.529 nvmf threads initialize successfully 00:10:54.529 bdev subsystem init successfully 00:10:54.529 created a nvmf target service 00:10:54.529 create targets's poll groups done 00:10:54.529 all subsystems of target started 00:10:54.529 nvmf target is running 00:10:54.529 all subsystems of target stopped 00:10:54.529 destroy targets's poll groups done 00:10:54.529 destroyed the nvmf target service 00:10:54.529 bdev subsystem finish successfully 00:10:54.529 nvmf threads destroy successfully 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.791 11:44:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.702 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.702 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:56.702 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.702 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.702 00:10:56.702 real 0m21.286s 00:10:56.702 user 0m46.502s 00:10:56.702 sys 0m6.861s 00:10:56.702 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.702 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.702 ************************************ 00:10:56.702 END TEST nvmf_example 00:10:56.702 ************************************ 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:56.963 11:44:41 
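nvmftestfini, traced above, mirrors the setup step for step: unload the initiator-side kernel modules, strip only the SPDK_NVMF-tagged firewall rules, and dismantle the namespace. As a sketch (the _remove_spdk_ns body is elided from this trace, so the netns delete line is an assumption):

  modprobe -v -r nvme-tcp          # drops nvme_tcp, nvme_fabrics, nvme_keyring, as echoed above
  modprobe -v -r nvme-fabrics      # already gone by this point; kept for the retry loop
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # the iptr helper: remove only tagged rules
  ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1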
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:56.963 ************************************ 00:10:56.963 START TEST nvmf_filesystem 00:10:56.963 ************************************ 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:56.963 * Looking for test storage... 00:10:56.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.963 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:56.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.964 --rc genhtml_branch_coverage=1 00:10:56.964 --rc genhtml_function_coverage=1 00:10:56.964 --rc genhtml_legend=1 00:10:56.964 --rc geninfo_all_blocks=1 00:10:56.964 --rc geninfo_unexecuted_blocks=1 00:10:56.964 00:10:56.964 ' 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:56.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.964 --rc genhtml_branch_coverage=1 00:10:56.964 --rc genhtml_function_coverage=1 00:10:56.964 --rc genhtml_legend=1 00:10:56.964 --rc geninfo_all_blocks=1 00:10:56.964 --rc geninfo_unexecuted_blocks=1 00:10:56.964 00:10:56.964 ' 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:56.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.964 --rc genhtml_branch_coverage=1 00:10:56.964 --rc genhtml_function_coverage=1 00:10:56.964 --rc genhtml_legend=1 00:10:56.964 --rc geninfo_all_blocks=1 00:10:56.964 --rc geninfo_unexecuted_blocks=1 00:10:56.964 00:10:56.964 ' 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:56.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.964 --rc genhtml_branch_coverage=1 00:10:56.964 --rc genhtml_function_coverage=1 00:10:56.964 --rc genhtml_legend=1 00:10:56.964 --rc geninfo_all_blocks=1 00:10:56.964 --rc geninfo_unexecuted_blocks=1 00:10:56.964 00:10:56.964 ' 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:56.964 11:44:41 
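The cmp_versions trace above is deciding whether the installed lcov (1.15) predates the 2.x option names. Its core is a plain field-wise compare, split on [.-:], with missing fields treated as zero; reduced to a sketch (numeric fields only):

  lt() {                                      # true if version $1 < version $2
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1                                # equal is not less-than
  }
  lt 1.15 2 && echo "use the legacy lcov 1.x flags"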
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:56.964 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:57.229 11:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:57.229 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:57.229 #define SPDK_CONFIG_H 00:10:57.229 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:57.229 #define SPDK_CONFIG_APPS 1 00:10:57.230 #define SPDK_CONFIG_ARCH native 00:10:57.230 #undef SPDK_CONFIG_ASAN 00:10:57.230 #undef SPDK_CONFIG_AVAHI 00:10:57.230 #undef SPDK_CONFIG_CET 00:10:57.230 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:57.230 #define SPDK_CONFIG_COVERAGE 1 00:10:57.230 #define SPDK_CONFIG_CROSS_PREFIX 00:10:57.230 #undef SPDK_CONFIG_CRYPTO 00:10:57.230 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:57.230 #undef SPDK_CONFIG_CUSTOMOCF 00:10:57.230 #undef SPDK_CONFIG_DAOS 00:10:57.230 #define SPDK_CONFIG_DAOS_DIR 00:10:57.230 #define SPDK_CONFIG_DEBUG 1 00:10:57.230 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:57.230 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:57.230 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:57.230 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:57.230 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:57.230 #undef SPDK_CONFIG_DPDK_UADK 00:10:57.230 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:57.230 #define SPDK_CONFIG_EXAMPLES 1 00:10:57.230 #undef SPDK_CONFIG_FC 00:10:57.230 #define SPDK_CONFIG_FC_PATH 00:10:57.230 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:57.230 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:57.230 #define SPDK_CONFIG_FSDEV 1 00:10:57.230 #undef SPDK_CONFIG_FUSE 00:10:57.230 #undef SPDK_CONFIG_FUZZER 00:10:57.230 #define SPDK_CONFIG_FUZZER_LIB 00:10:57.230 #undef SPDK_CONFIG_GOLANG 00:10:57.230 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:57.230 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:57.230 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:57.230 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:57.230 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:57.230 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:57.230 #undef SPDK_CONFIG_HAVE_LZ4 00:10:57.230 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:57.230 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:57.230 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:57.230 #define SPDK_CONFIG_IDXD 1 00:10:57.230 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:57.230 #undef SPDK_CONFIG_IPSEC_MB 00:10:57.230 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:57.230 #define SPDK_CONFIG_ISAL 1 00:10:57.230 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:57.230 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:57.230 #define SPDK_CONFIG_LIBDIR 00:10:57.230 #undef SPDK_CONFIG_LTO 00:10:57.230 #define SPDK_CONFIG_MAX_LCORES 128 00:10:57.230 #define SPDK_CONFIG_NVME_CUSE 1 00:10:57.230 #undef SPDK_CONFIG_OCF 00:10:57.230 #define SPDK_CONFIG_OCF_PATH 00:10:57.230 #define SPDK_CONFIG_OPENSSL_PATH 00:10:57.230 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:57.230 #define SPDK_CONFIG_PGO_DIR 00:10:57.230 #undef SPDK_CONFIG_PGO_USE 00:10:57.230 #define SPDK_CONFIG_PREFIX /usr/local 00:10:57.230 #undef SPDK_CONFIG_RAID5F 00:10:57.230 #undef SPDK_CONFIG_RBD 00:10:57.230 #define SPDK_CONFIG_RDMA 1 00:10:57.230 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:57.230 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:57.230 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:57.230 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:57.230 #define SPDK_CONFIG_SHARED 1 00:10:57.230 #undef SPDK_CONFIG_SMA 00:10:57.230 #define SPDK_CONFIG_TESTS 1 00:10:57.230 #undef SPDK_CONFIG_TSAN 00:10:57.230 #define SPDK_CONFIG_UBLK 1 00:10:57.230 #define SPDK_CONFIG_UBSAN 1 00:10:57.230 #undef SPDK_CONFIG_UNIT_TESTS 00:10:57.230 #undef SPDK_CONFIG_URING 00:10:57.230 #define 
SPDK_CONFIG_URING_PATH 00:10:57.230 #undef SPDK_CONFIG_URING_ZNS 00:10:57.230 #undef SPDK_CONFIG_USDT 00:10:57.230 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:57.230 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:57.230 #define SPDK_CONFIG_VFIO_USER 1 00:10:57.230 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:57.230 #define SPDK_CONFIG_VHOST 1 00:10:57.230 #define SPDK_CONFIG_VIRTIO 1 00:10:57.230 #undef SPDK_CONFIG_VTUNE 00:10:57.230 #define SPDK_CONFIG_VTUNE_DIR 00:10:57.230 #define SPDK_CONFIG_WERROR 1 00:10:57.230 #define SPDK_CONFIG_WPDK_DIR 00:10:57.230 #undef SPDK_CONFIG_XNVME 00:10:57.230 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.230 11:44:41 
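The applications.sh check traced just above gates SPDK_AUTOTEST_DEBUG_APPS by pattern-matching the generated config header for the SPDK_CONFIG_DEBUG define. The same idiom can gate any test on a build option without re-querying the build system; a sketch using the header path from the trace:

  config=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
  if [[ -e $config && $(<"$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build: app-level debug knobs available"
  fi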
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
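The exported PATH above carries the go/protoc/golangci directories many times over because paths/export.sh prepends unconditionally each time it is sourced. Harmless here, but the usual guard is an idempotent prepend; a sketch (not SPDK code):

  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;                # already a component, do nothing
          *) PATH=$1:$PATH ;;
      esac
  }
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/golangci/1.54.2/bin
  export PATH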
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:57.230 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:57.231 
11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:57.231 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:57.231 11:44:41 
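
Note: the paired ": <value>" / "export SPDK_TEST_*" entries traced above come from the assign-default-then-export idiom in autotest_common.sh; under xtrace the ":" builtin echoes the resolved value, which is why every flag appears as two log lines. A minimal sketch of the pattern (SPDK_TEST_EXAMPLE is an illustrative name, not a real flag):

    # ':' is a no-op command; the ${VAR=default} expansion inside it assigns
    # the default only when VAR is unset, and xtrace prints the result,
    # producing the "-- # : 0" / "-- # export ..." pairs seen above.
    : "${SPDK_TEST_EXAMPLE=0}"
    export SPDK_TEST_EXAMPLE
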
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
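
Note: the repeated spdk/build/lib, dpdk/build/lib and libvfio-user segments in LD_LIBRARY_PATH (and the repeated python/rpc_plugins entries in PYTHONPATH) accumulate because each nested test suite sources the common script again and prepends the same directories. A hypothetical dedup guard, shown only to illustrate why the duplicates are harmless but avoidable; this is not what autotest_common.sh actually does:

    # prepend_once: add a directory to a PATH-like variable only if absent.
    prepend_once() {
        local var=$1 dir=$2
        case ":${!var}:" in
            *":$dir:"*) ;;   # already present, leave the variable alone
            *) printf -v "$var" '%s' "$dir${!var:+:${!var}}" ;;
        esac
    }
    prepend_once LD_LIBRARY_PATH "$SPDK_LIB_DIR"   # SPDK_LIB_DIR exported above
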
00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
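
Note: the lines above assemble the leak-sanitizer suppression file: the old file is removed, a known-noisy entry (leak:libfuse3.so) is appended, and LSAN_OPTIONS points at the result. A condensed reconstruction of the traced steps; the cat at autotest_common.sh@204 concatenates an existing list whose arguments are not visible in this excerpt:

    supp=/var/tmp/asan_suppression_file
    rm -f "$supp"
    echo "leak:libfuse3.so" >> "$supp"    # silence known fuse3 leak reports
    export LSAN_OPTIONS="suppressions=$supp"
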
00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:57.232 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 897129 ]] 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 897129 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:57.233 
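
Note: "kill -0 897129" above is a liveness probe, not a kill: signal 0 delivers nothing and only checks that the PID exists and can be signaled, so the suite bails out early if the autotest runner has died. The same check in isolation (PID taken from the trace):

    pid=897129
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive"
    fi
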
11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.I8cJ3Y 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.I8cJ3Y/tests/target /tmp/spdk.I8cJ3Y 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=607141888 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:57.233 11:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677287936 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=123528400896 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356562432 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5828161536 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668250112 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847959552 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871314944 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23355392 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:57.233 11:44:41 
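
Note: set_test_storage walks df -T output line by line into associative arrays keyed by mount point; the "_" field discards the Use% column. A sketch of the loop as traced above; the *1024 scaling is an inference from the byte-sized values stored in the trace (df -T reports 1K blocks):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
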
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64678076416 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678281216 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=204800 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935643136 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935655424 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:57.233 * Looking for test storage... 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:57.233 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=123528400896 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8042754048 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:57.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.234 --rc genhtml_branch_coverage=1 00:10:57.234 --rc genhtml_function_coverage=1 00:10:57.234 --rc genhtml_legend=1 00:10:57.234 --rc geninfo_all_blocks=1 00:10:57.234 --rc geninfo_unexecuted_blocks=1 00:10:57.234 00:10:57.234 ' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:57.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.234 --rc genhtml_branch_coverage=1 00:10:57.234 --rc genhtml_function_coverage=1 00:10:57.234 --rc genhtml_legend=1 00:10:57.234 --rc geninfo_all_blocks=1 00:10:57.234 --rc geninfo_unexecuted_blocks=1 00:10:57.234 00:10:57.234 ' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:57.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.234 --rc genhtml_branch_coverage=1 00:10:57.234 --rc genhtml_function_coverage=1 00:10:57.234 --rc genhtml_legend=1 00:10:57.234 --rc geninfo_all_blocks=1 00:10:57.234 --rc geninfo_unexecuted_blocks=1 00:10:57.234 00:10:57.234 ' 00:10:57.234 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:57.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.234 --rc genhtml_branch_coverage=1 00:10:57.234 --rc genhtml_function_coverage=1 00:10:57.234 --rc genhtml_legend=1 00:10:57.234 --rc geninfo_all_blocks=1 00:10:57.234 --rc geninfo_unexecuted_blocks=1 00:10:57.234 00:10:57.234 ' 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
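
Note: the lt/cmp_versions trace above (checking the installed lcov 1.15 against 2) splits each version string on ".", "-" and ":" and compares numerically component by component. A condensed sketch; the real scripts/common.sh additionally validates each field through its decimal() helper before comparing:

    lt() {   # returns 0 when $1 is strictly older than $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1                                 # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"
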
-- nvmf/common.sh@7 -- # uname -s 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.496 11:44:41 
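
Note: the "[: : integer expression expected" warning logged above is bash complaining that nvmf/common.sh line 33 ran a numeric test against an empty string: [ "" -eq 1 ] is not a valid integer comparison, so the test returns status 2 instead of a clean false. A sketch of the failure and one defensive rewrite (illustrative, not the project's actual fix):

    v=""
    [ "$v" -eq 1 ]          # reproduces: "[: : integer expression expected"
    [ "${v:-0}" -eq 1 ]     # defaulting first keeps the test well-formed
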
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.496 11:44:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.634 11:44:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:05.634 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:05.634 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.634 11:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:05.634 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:05.634 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:05.634 11:44:49 
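
Note: the NIC-discovery trace above expands a sysfs glob per whitelisted PCI address to find which kernel interfaces sit on each port; the ##*/ strip keeps only the interface names (cvl_0_0 and cvl_0_1 here). A sketch of that lookup, with the two addresses taken from the trace:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # directory names == interfaces
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
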
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.634 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:11:05.635 00:11:05.635 --- 10.0.0.2 ping statistics --- 00:11:05.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.635 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:11:05.635 00:11:05.635 --- 10.0.0.1 ping statistics --- 00:11:05.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.635 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:05.635 ************************************ 00:11:05.635 START TEST nvmf_filesystem_no_in_capsule 00:11:05.635 ************************************ 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=900934 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 900934 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 900934 ']' 00:11:05.635 11:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.635 11:44:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.635 [2024-10-11 11:44:49.460811] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:05.635 [2024-10-11 11:44:49.460870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.635 [2024-10-11 11:44:49.551906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.635 [2024-10-11 11:44:49.604719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.635 [2024-10-11 11:44:49.604772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.635 [2024-10-11 11:44:49.604780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.635 [2024-10-11 11:44:49.604787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.635 [2024-10-11 11:44:49.604793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
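For reference, the nvmf_tcp_init plumbing traced above condenses to the shell sequence below. This is a minimal sketch: the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses, and the namespace name are whatever this CI host discovered, so treat them as environment-specific rather than fixed SPDK defaults.

# Move the target-side port into its own network namespace so initiator and
# target traffic cross a real link even on a single host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the standard NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check one ping in each direction, then load the kernel initiator driver.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp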
00:11:05.635 [2024-10-11 11:44:49.607200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.635 [2024-10-11 11:44:49.607352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.635 [2024-10-11 11:44:49.607516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.635 [2024-10-11 11:44:49.607516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.896 [2024-10-11 11:44:50.331319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.896 Malloc1 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.896 11:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.896 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.897 [2024-10-11 11:44:50.485040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:05.897 { 00:11:05.897 "name": "Malloc1", 00:11:05.897 "aliases": [ 00:11:05.897 "5bab048b-7ab1-45df-8757-6b89b86d11d0" 00:11:05.897 ], 00:11:05.897 "product_name": "Malloc disk", 00:11:05.897 "block_size": 512, 00:11:05.897 "num_blocks": 1048576, 00:11:05.897 "uuid": "5bab048b-7ab1-45df-8757-6b89b86d11d0", 00:11:05.897 "assigned_rate_limits": { 00:11:05.897 "rw_ios_per_sec": 0, 00:11:05.897 "rw_mbytes_per_sec": 0, 00:11:05.897 "r_mbytes_per_sec": 0, 00:11:05.897 "w_mbytes_per_sec": 0 00:11:05.897 }, 00:11:05.897 "claimed": true, 00:11:05.897 "claim_type": "exclusive_write", 00:11:05.897 "zoned": false, 00:11:05.897 "supported_io_types": { 00:11:05.897 "read": 
true, 00:11:05.897 "write": true, 00:11:05.897 "unmap": true, 00:11:05.897 "flush": true, 00:11:05.897 "reset": true, 00:11:05.897 "nvme_admin": false, 00:11:05.897 "nvme_io": false, 00:11:05.897 "nvme_io_md": false, 00:11:05.897 "write_zeroes": true, 00:11:05.897 "zcopy": true, 00:11:05.897 "get_zone_info": false, 00:11:05.897 "zone_management": false, 00:11:05.897 "zone_append": false, 00:11:05.897 "compare": false, 00:11:05.897 "compare_and_write": false, 00:11:05.897 "abort": true, 00:11:05.897 "seek_hole": false, 00:11:05.897 "seek_data": false, 00:11:05.897 "copy": true, 00:11:05.897 "nvme_iov_md": false 00:11:05.897 }, 00:11:05.897 "memory_domains": [ 00:11:05.897 { 00:11:05.897 "dma_device_id": "system", 00:11:05.897 "dma_device_type": 1 00:11:05.897 }, 00:11:05.897 { 00:11:05.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.897 "dma_device_type": 2 00:11:05.897 } 00:11:05.897 ], 00:11:05.897 "driver_specific": {} 00:11:05.897 } 00:11:05.897 ]' 00:11:05.897 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:06.158 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:06.158 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:06.158 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:06.158 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:06.158 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:06.158 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:06.158 11:44:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:07.544 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:07.544 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:07.544 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:07.544 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:07.544 11:44:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:10.088 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:10.089 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:10.089 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:10.089 11:44:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.030 ************************************ 00:11:11.030 START TEST filesystem_ext4 00:11:11.030 ************************************ 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
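Stripped of the xtrace noise, the target provisioning and host attach performed above look like the sequence below. rpc_cmd in the trace resolves to SPDK's scripts/rpc.py talking to the nvmf_tgt started under waitforlisten; the addresses, NQNs, and the host UUID are the ones logged in this run.

# Target side: TCP transport with no in-capsule data (-c 0), a 512 MiB malloc
# bdev (1048576 x 512 B blocks, matching the bdev_get_bdevs dump above),
# one subsystem, one namespace, one listener.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach with the kernel initiator, wait for the namespace to show
# up as a block device, then lay down one GPT partition for the fs tests.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe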
00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:11.030 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:11.030 mke2fs 1.47.0 (5-Feb-2023) 00:11:11.030 Discarding device blocks: 0/522240 done 00:11:11.030 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:11.030 Filesystem UUID: a8df29dd-1eef-4f66-b61f-d394d93fb48b 00:11:11.030 Superblock backups stored on blocks: 00:11:11.030 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:11.030 00:11:11.030 Allocating group tables: 0/64 done 00:11:11.030 Writing inode tables: 0/64 done 00:11:11.030 Creating journal (8192 blocks): done 00:11:11.291 Writing superblocks and filesystem accounting information: 0/64 done 00:11:11.291 00:11:11.291 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:11.291 11:44:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:17.871 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:17.872 
11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 900934 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:17.872 00:11:17.872 real 0m6.013s 00:11:17.872 user 0m0.022s 00:11:17.872 sys 0m0.057s 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:17.872 ************************************ 00:11:17.872 END TEST filesystem_ext4 00:11:17.872 ************************************ 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.872 ************************************ 00:11:17.872 START TEST filesystem_btrfs 00:11:17.872 ************************************ 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:17.872 11:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:17.872 btrfs-progs v6.8.1 00:11:17.872 See https://btrfs.readthedocs.io for more information. 00:11:17.872 00:11:17.872 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:17.872 NOTE: several default settings have changed in version 5.15, please make sure 00:11:17.872 this does not affect your deployments: 00:11:17.872 - DUP for metadata (-m dup) 00:11:17.872 - enabled no-holes (-O no-holes) 00:11:17.872 - enabled free-space-tree (-R free-space-tree) 00:11:17.872 00:11:17.872 Label: (null) 00:11:17.872 UUID: ba3cc4cf-1a36-4db4-958b-c350f1f76b81 00:11:17.872 Node size: 16384 00:11:17.872 Sector size: 4096 (CPU page size: 4096) 00:11:17.872 Filesystem size: 510.00MiB 00:11:17.872 Block group profiles: 00:11:17.872 Data: single 8.00MiB 00:11:17.872 Metadata: DUP 32.00MiB 00:11:17.872 System: DUP 8.00MiB 00:11:17.872 SSD detected: yes 00:11:17.872 Zoned device: no 00:11:17.872 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:17.872 Checksum: crc32c 00:11:17.872 Number of devices: 1 00:11:17.872 Devices: 00:11:17.872 ID SIZE PATH 00:11:17.872 1 510.00MiB /dev/nvme0n1p1 00:11:17.872 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:17.872 11:45:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 900934 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:17.872 
11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:17.872 00:11:17.872 real 0m0.707s 00:11:17.872 user 0m0.021s 00:11:17.872 sys 0m0.068s 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:17.872 ************************************ 00:11:17.872 END TEST filesystem_btrfs 00:11:17.872 ************************************ 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.872 ************************************ 00:11:17.872 START TEST filesystem_xfs 00:11:17.872 ************************************ 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:17.872 11:45:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:17.872 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:17.872 = sectsz=512 attr=2, projid32bit=1 00:11:17.872 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:17.872 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:17.872 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:17.872 = sunit=0 swidth=0 blks 00:11:17.872 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:17.872 log =internal log bsize=4096 blocks=16384, version=2 00:11:17.872 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:17.872 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:18.813 Discarding blocks...Done. 00:11:18.813 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:18.813 11:45:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.724 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.724 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:20.724 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.724 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:20.724 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:20.724 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.984 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 900934 00:11:20.984 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.984 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.984 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.984 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.984 00:11:20.984 real 0m3.156s 00:11:20.984 user 0m0.026s 00:11:20.984 sys 0m0.054s 00:11:20.984 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.984 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:20.984 ************************************ 00:11:20.984 END TEST filesystem_xfs 00:11:20.984 ************************************ 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.985 11:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 900934 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 900934 ']' 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 900934 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.985 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 900934 00:11:21.246 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.246 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.246 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 900934' 00:11:21.246 killing process with pid 900934 00:11:21.246 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 900934 00:11:21.246 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 900934 00:11:21.246 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:21.246 00:11:21.246 real 0m16.452s 00:11:21.246 user 1m4.904s 00:11:21.246 sys 0m1.261s 00:11:21.246 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.246 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.246 ************************************ 00:11:21.246 END TEST nvmf_filesystem_no_in_capsule 00:11:21.246 ************************************ 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.506 ************************************ 00:11:21.506 START TEST nvmf_filesystem_in_capsule 00:11:21.506 ************************************ 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=904422 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 904422 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 904422 ']' 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
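The second pass re-runs the same filesystem suite with in_capsule=4096. The only provisioning difference from the first pass is the transport's in-capsule data size, which lets writes of up to 4 KiB travel inside the NVMe/TCP command capsule instead of being fetched by the target in a separate data transfer, as the trace below shows:

# Same transport options as before except -c: max in-capsule data 4096 bytes.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096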
00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.506 11:45:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.506 [2024-10-11 11:45:05.991040] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:21.506 [2024-10-11 11:45:05.991092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.506 [2024-10-11 11:45:06.073056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.506 [2024-10-11 11:45:06.108914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.506 [2024-10-11 11:45:06.108947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.506 [2024-10-11 11:45:06.108953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.506 [2024-10-11 11:45:06.108957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.506 [2024-10-11 11:45:06.108962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.506 [2024-10-11 11:45:06.110222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.506 [2024-10-11 11:45:06.110381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.506 [2024-10-11 11:45:06.110903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.506 [2024-10-11 11:45:06.110905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.448 [2024-10-11 11:45:06.841946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.448 11:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.448 Malloc1 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.448 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.449 [2024-10-11 11:45:06.962978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:22.449 11:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:22.449 { 00:11:22.449 "name": "Malloc1", 00:11:22.449 "aliases": [ 00:11:22.449 "a1781965-1c0f-4bc3-91f8-51836d839ffd" 00:11:22.449 ], 00:11:22.449 "product_name": "Malloc disk", 00:11:22.449 "block_size": 512, 00:11:22.449 "num_blocks": 1048576, 00:11:22.449 "uuid": "a1781965-1c0f-4bc3-91f8-51836d839ffd", 00:11:22.449 "assigned_rate_limits": { 00:11:22.449 "rw_ios_per_sec": 0, 00:11:22.449 "rw_mbytes_per_sec": 0, 00:11:22.449 "r_mbytes_per_sec": 0, 00:11:22.449 "w_mbytes_per_sec": 0 00:11:22.449 }, 00:11:22.449 "claimed": true, 00:11:22.449 "claim_type": "exclusive_write", 00:11:22.449 "zoned": false, 00:11:22.449 "supported_io_types": { 00:11:22.449 "read": true, 00:11:22.449 "write": true, 00:11:22.449 "unmap": true, 00:11:22.449 "flush": true, 00:11:22.449 "reset": true, 00:11:22.449 "nvme_admin": false, 00:11:22.449 "nvme_io": false, 00:11:22.449 "nvme_io_md": false, 00:11:22.449 "write_zeroes": true, 00:11:22.449 "zcopy": true, 00:11:22.449 "get_zone_info": false, 00:11:22.449 "zone_management": false, 00:11:22.449 "zone_append": false, 00:11:22.449 "compare": false, 00:11:22.449 "compare_and_write": false, 00:11:22.449 "abort": true, 00:11:22.449 "seek_hole": false, 00:11:22.449 "seek_data": false, 00:11:22.449 "copy": true, 00:11:22.449 "nvme_iov_md": false 00:11:22.449 }, 00:11:22.449 "memory_domains": [ 00:11:22.449 { 00:11:22.449 "dma_device_id": "system", 00:11:22.449 "dma_device_type": 1 00:11:22.449 }, 00:11:22.449 { 00:11:22.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.449 "dma_device_type": 2 00:11:22.449 } 00:11:22.449 ], 00:11:22.449 "driver_specific": {} 00:11:22.449 } 00:11:22.449 ]' 00:11:22.449 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:22.449 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:22.449 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:22.449 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:22.449 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:22.449 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:22.449 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:22.449 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.361 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.361 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.361 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.361 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:24.361 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:26.275 11:45:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:26.535 11:45:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:26.535 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:27.475 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:27.475 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:27.475 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:27.475 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.475 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.736 ************************************ 00:11:27.736 START TEST filesystem_in_capsule_ext4 00:11:27.736 ************************************ 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:27.736 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:27.736 mke2fs 1.47.0 (5-Feb-2023) 00:11:27.736 Discarding device blocks: 0/522240 done 00:11:27.736 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:27.736 Filesystem UUID: 77f4e0fb-5098-4394-a3ea-cabee7028149 00:11:27.736 Superblock backups stored on blocks: 00:11:27.736 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:27.736 00:11:27.736 Allocating group tables: 0/64 done 00:11:27.736 Writing inode tables: 
0/64 done 00:11:29.119 Creating journal (8192 blocks): done 00:11:30.762 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:30.762 00:11:30.762 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:30.762 11:45:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 904422 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.347 00:11:37.347 real 0m9.437s 00:11:37.347 user 0m0.024s 00:11:37.347 sys 0m0.062s 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:37.347 ************************************ 00:11:37.347 END TEST filesystem_in_capsule_ext4 00:11:37.347 ************************************ 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.347 
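The ext4 case above, and the btrfs and xfs cases that follow, run the same mount smoke test from target/filesystem.sh once mkfs finishes: mount the fresh partition, create and delete a file with syncs in between, unmount, then confirm the target process and both block devices survived. A condensed sketch of that loop (device names and the 904422 target pid as in this run; error handling elided):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 904422                            # nvmf_tgt must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # controller still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible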
************************************ 00:11:37.347 START TEST filesystem_in_capsule_btrfs 00:11:37.347 ************************************ 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:37.347 11:45:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:37.608 btrfs-progs v6.8.1 00:11:37.608 See https://btrfs.readthedocs.io for more information. 00:11:37.608 00:11:37.608 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:37.608 NOTE: several default settings have changed in version 5.15, please make sure 00:11:37.608 this does not affect your deployments: 00:11:37.608 - DUP for metadata (-m dup) 00:11:37.608 - enabled no-holes (-O no-holes) 00:11:37.608 - enabled free-space-tree (-R free-space-tree) 00:11:37.608 00:11:37.608 Label: (null) 00:11:37.608 UUID: 48bde771-32b6-4042-a2b1-8fa24190c1cf 00:11:37.608 Node size: 16384 00:11:37.608 Sector size: 4096 (CPU page size: 4096) 00:11:37.608 Filesystem size: 510.00MiB 00:11:37.608 Block group profiles: 00:11:37.608 Data: single 8.00MiB 00:11:37.608 Metadata: DUP 32.00MiB 00:11:37.608 System: DUP 8.00MiB 00:11:37.608 SSD detected: yes 00:11:37.608 Zoned device: no 00:11:37.608 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:37.608 Checksum: crc32c 00:11:37.608 Number of devices: 1 00:11:37.608 Devices: 00:11:37.608 ID SIZE PATH 00:11:37.608 1 510.00MiB /dev/nvme0n1p1 00:11:37.608 00:11:37.608 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:37.608 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.867 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.867 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:37.867 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.867 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:37.867 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:37.867 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 904422 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.868 00:11:37.868 real 0m0.704s 00:11:37.868 user 0m0.028s 00:11:37.868 sys 0m0.059s 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:37.868 ************************************ 00:11:37.868 END TEST filesystem_in_capsule_btrfs 00:11:37.868 ************************************ 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.868 ************************************ 00:11:37.868 START TEST filesystem_in_capsule_xfs 00:11:37.868 ************************************ 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:37.868 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:37.868 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:37.868 = sectsz=512 attr=2, projid32bit=1 00:11:37.868 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:37.868 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:37.868 data = bsize=4096 blocks=130560, imaxpct=25 00:11:37.868 = sunit=0 swidth=0 blks 00:11:37.868 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:37.868 log =internal log bsize=4096 blocks=16384, version=2 00:11:37.868 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:37.868 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:38.810 Discarding blocks...Done. 
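The three mkfs invocations differ only in their force flag: make_filesystem in common/autotest_common.sh special-cases ext4 (-F) and falls back to -f for everything else, which is exactly the `'[' btrfs = ext4 ']'`-style branching traced above. A minimal sketch of that dispatch under the trace's own variable names (the script's retry loop is elided):

  fstype=$1 dev_name=$2            # e.g. xfs /dev/nvme0n1p1
  if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
  mkfs.$fstype $force "$dev_name"  # mkfs.ext4 -F / mkfs.btrfs -f / mkfs.xfs -f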
00:11:38.810 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:38.810 11:45:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 904422 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.722 00:11:40.722 real 0m2.816s 00:11:40.722 user 0m0.024s 00:11:40.722 sys 0m0.057s 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:40.722 ************************************ 00:11:40.722 END TEST filesystem_in_capsule_xfs 00:11:40.722 ************************************ 00:11:40.722 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:40.723 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:40.723 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 904422 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 904422 ']' 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 904422 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 904422 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 904422' 00:11:40.983 killing process with pid 904422 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 904422 00:11:40.983 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 904422 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:41.244 00:11:41.244 real 0m19.783s 00:11:41.244 user 1m18.264s 00:11:41.244 sys 0m1.264s 00:11:41.244 11:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.244 ************************************ 00:11:41.244 END TEST nvmf_filesystem_in_capsule 00:11:41.244 ************************************ 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.244 rmmod nvme_tcp 00:11:41.244 rmmod nvme_fabrics 00:11:41.244 rmmod nvme_keyring 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.244 11:45:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.789 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:43.789 00:11:43.789 real 0m46.508s 00:11:43.789 user 2m25.558s 00:11:43.789 sys 0m8.386s 00:11:43.789 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.789 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.789 
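The shutdown traced above is the setup in reverse: delete the test partition under flock so udev's probing cannot race the change, disconnect the host from cnode1, delete the subsystem over RPC, kill the target process, then unload the kernel initiator modules, which is what produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines. The same sequence as a standalone sketch (pid and NQN from this run):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 904422 && wait 904422
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics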
************************************ 00:11:43.789 END TEST nvmf_filesystem 00:11:43.789 ************************************ 00:11:43.789 11:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:43.789 11:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:43.789 11:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.789 11:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.789 ************************************ 00:11:43.789 START TEST nvmf_target_discovery 00:11:43.789 ************************************ 00:11:43.789 11:45:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:43.789 * Looking for test storage... 00:11:43.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.789 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.790 --rc genhtml_branch_coverage=1 00:11:43.790 --rc genhtml_function_coverage=1 00:11:43.790 --rc genhtml_legend=1 00:11:43.790 --rc geninfo_all_blocks=1 00:11:43.790 --rc geninfo_unexecuted_blocks=1 00:11:43.790 00:11:43.790 ' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.790 --rc genhtml_branch_coverage=1 00:11:43.790 --rc genhtml_function_coverage=1 00:11:43.790 --rc genhtml_legend=1 00:11:43.790 --rc geninfo_all_blocks=1 00:11:43.790 --rc geninfo_unexecuted_blocks=1 00:11:43.790 00:11:43.790 ' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.790 --rc genhtml_branch_coverage=1 00:11:43.790 --rc genhtml_function_coverage=1 00:11:43.790 --rc genhtml_legend=1 00:11:43.790 --rc geninfo_all_blocks=1 00:11:43.790 --rc geninfo_unexecuted_blocks=1 00:11:43.790 00:11:43.790 ' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.790 --rc genhtml_branch_coverage=1 00:11:43.790 --rc genhtml_function_coverage=1 00:11:43.790 --rc genhtml_legend=1 00:11:43.790 --rc geninfo_all_blocks=1 00:11:43.790 --rc geninfo_unexecuted_blocks=1 00:11:43.790 00:11:43.790 ' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.790 11:45:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.934 11:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:51.934 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:51.934 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:51.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
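gather_supported_nvmf_pci_devs above matches each PCI function against a table of supported Intel/Mellanox IDs (on this rig: two E810 ports, vendor 0x8086, device 0x159b) and then resolves each function to its kernel interface by globbing the device's net/ directory, which is where cvl_0_0 and cvl_0_1 come from. The same lookup done by hand, with sysfs paths as in the trace:

  pci=0000:4b:00.0
  cat /sys/bus/pci/devices/$pci/vendor   # 0x8086
  cat /sys/bus/pci/devices/$pci/device   # 0x159b (E810)
  ls /sys/bus/pci/devices/$pci/net/      # cvl_0_0 - the interface bound to this port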
00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:51.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.934 11:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.934 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:11:51.935 00:11:51.935 --- 10.0.0.2 ping statistics --- 00:11:51.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.935 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:51.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:11:51.935 00:11:51.935 --- 10.0.0.1 ping statistics --- 00:11:51.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.935 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=913252 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 913252 00:11:51.935 11:45:35 
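nvmf_tcp_init above builds a two-node topology out of one machine: the first port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side), the second (cvl_0_1) stays in the root namespace as 10.0.0.1 (initiator side), an iptables ACCEPT rule opens port 4420, and a ping in each direction proves reachability before anything NVMe-specific starts. A condensed sketch of that wiring (interface and namespace names as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1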
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 913252 ']' 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.935 11:45:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 [2024-10-11 11:45:35.734172] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:11:51.935 [2024-10-11 11:45:35.734235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.935 [2024-10-11 11:45:35.824005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.935 [2024-10-11 11:45:35.876909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.935 [2024-10-11 11:45:35.876969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.935 [2024-10-11 11:45:35.876978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.935 [2024-10-11 11:45:35.876984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.935 [2024-10-11 11:45:35.876991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
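Note on the trace above: nvmf_tcp_init (nvmf/common.sh) isolates the target-side port in its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on a single host. A minimal sketch of the equivalent manual setup, with interface names, addresses, and the iptables comment tag taken from this log:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'            # tagged so teardown can strip only this rule
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back

NVMF_TARGET_NS_CMD then prefixes every nvmf_tgt invocation with "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt just launched listens on 10.0.0.2 from inside the namespace.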
00:11:51.935 [2024-10-11 11:45:35.879075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.935 [2024-10-11 11:45:35.879239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.935 [2024-10-11 11:45:35.879400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.935 [2024-10-11 11:45:35.879400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.935 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.935 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:51.935 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:51.935 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.935 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 [2024-10-11 11:45:36.614706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 Null1 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 [2024-10-11 11:45:36.675239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 Null2 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:52.197 Null3 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.197 Null4 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:52.197 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.198 11:45:36 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.198 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.459 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.459 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:52.459 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.459 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.459 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.459 11:45:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:52.459 00:11:52.459 Discovery Log Number of Records 6, Generation counter 6 00:11:52.459 =====Discovery Log Entry 0====== 00:11:52.459 trtype: tcp 00:11:52.459 adrfam: ipv4 00:11:52.459 subtype: current discovery subsystem 00:11:52.459 treq: not required 00:11:52.459 portid: 0 00:11:52.459 trsvcid: 4420 00:11:52.459 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:52.459 traddr: 10.0.0.2 00:11:52.459 eflags: explicit discovery connections, duplicate discovery information 00:11:52.459 sectype: none 00:11:52.459 =====Discovery Log Entry 1====== 00:11:52.459 trtype: tcp 00:11:52.459 adrfam: ipv4 00:11:52.459 subtype: nvme subsystem 00:11:52.459 treq: not required 00:11:52.459 portid: 0 00:11:52.459 trsvcid: 4420 00:11:52.459 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:52.459 traddr: 10.0.0.2 00:11:52.459 eflags: none 00:11:52.459 sectype: none 00:11:52.459 =====Discovery Log Entry 2====== 00:11:52.459 trtype: tcp 00:11:52.459 adrfam: ipv4 00:11:52.459 subtype: nvme subsystem 00:11:52.459 treq: not required 00:11:52.459 portid: 0 00:11:52.459 trsvcid: 4420 00:11:52.459 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:52.459 traddr: 10.0.0.2 00:11:52.459 eflags: none 00:11:52.459 sectype: none 00:11:52.459 =====Discovery Log Entry 3====== 00:11:52.459 trtype: tcp 00:11:52.459 adrfam: ipv4 00:11:52.459 subtype: nvme subsystem 00:11:52.459 treq: not required 00:11:52.459 portid: 0 00:11:52.459 trsvcid: 4420 00:11:52.459 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:52.459 traddr: 10.0.0.2 00:11:52.459 eflags: none 00:11:52.459 sectype: none 00:11:52.459 =====Discovery Log Entry 4====== 00:11:52.459 trtype: tcp 00:11:52.459 adrfam: ipv4 00:11:52.459 subtype: nvme subsystem 
00:11:52.459 treq: not required 00:11:52.459 portid: 0 00:11:52.459 trsvcid: 4420 00:11:52.459 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:52.459 traddr: 10.0.0.2 00:11:52.459 eflags: none 00:11:52.459 sectype: none 00:11:52.459 =====Discovery Log Entry 5====== 00:11:52.459 trtype: tcp 00:11:52.459 adrfam: ipv4 00:11:52.459 subtype: discovery subsystem referral 00:11:52.459 treq: not required 00:11:52.459 portid: 0 00:11:52.459 trsvcid: 4430 00:11:52.459 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:52.459 traddr: 10.0.0.2 00:11:52.459 eflags: none 00:11:52.459 sectype: none 00:11:52.459 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:52.459 Perform nvmf subsystem discovery via RPC 00:11:52.459 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:52.459 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.459 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.459 [ 00:11:52.459 { 00:11:52.459 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:52.459 "subtype": "Discovery", 00:11:52.459 "listen_addresses": [ 00:11:52.459 { 00:11:52.459 "trtype": "TCP", 00:11:52.460 "adrfam": "IPv4", 00:11:52.460 "traddr": "10.0.0.2", 00:11:52.460 "trsvcid": "4420" 00:11:52.460 } 00:11:52.460 ], 00:11:52.460 "allow_any_host": true, 00:11:52.460 "hosts": [] 00:11:52.460 }, 00:11:52.460 { 00:11:52.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.460 "subtype": "NVMe", 00:11:52.460 "listen_addresses": [ 00:11:52.460 { 00:11:52.460 "trtype": "TCP", 00:11:52.460 "adrfam": "IPv4", 00:11:52.460 "traddr": "10.0.0.2", 00:11:52.460 "trsvcid": "4420" 00:11:52.460 } 00:11:52.460 ], 00:11:52.460 "allow_any_host": true, 00:11:52.460 "hosts": [], 00:11:52.460 "serial_number": "SPDK00000000000001", 00:11:52.460 "model_number": "SPDK bdev Controller", 00:11:52.460 "max_namespaces": 32, 00:11:52.460 "min_cntlid": 1, 00:11:52.460 "max_cntlid": 65519, 00:11:52.460 "namespaces": [ 00:11:52.460 { 00:11:52.460 "nsid": 1, 00:11:52.460 "bdev_name": "Null1", 00:11:52.460 "name": "Null1", 00:11:52.460 "nguid": "CCAD3C6F5D0D453D977CCAB85D6E52E1", 00:11:52.460 "uuid": "ccad3c6f-5d0d-453d-977c-cab85d6e52e1" 00:11:52.460 } 00:11:52.460 ] 00:11:52.460 }, 00:11:52.460 { 00:11:52.460 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:52.460 "subtype": "NVMe", 00:11:52.460 "listen_addresses": [ 00:11:52.460 { 00:11:52.460 "trtype": "TCP", 00:11:52.460 "adrfam": "IPv4", 00:11:52.460 "traddr": "10.0.0.2", 00:11:52.460 "trsvcid": "4420" 00:11:52.460 } 00:11:52.460 ], 00:11:52.460 "allow_any_host": true, 00:11:52.460 "hosts": [], 00:11:52.460 "serial_number": "SPDK00000000000002", 00:11:52.460 "model_number": "SPDK bdev Controller", 00:11:52.460 "max_namespaces": 32, 00:11:52.460 "min_cntlid": 1, 00:11:52.460 "max_cntlid": 65519, 00:11:52.460 "namespaces": [ 00:11:52.460 { 00:11:52.460 "nsid": 1, 00:11:52.460 "bdev_name": "Null2", 00:11:52.460 "name": "Null2", 00:11:52.460 "nguid": "A986A6802C934D99B10FD93531C9AAC1", 00:11:52.460 "uuid": "a986a680-2c93-4d99-b10f-d93531c9aac1" 00:11:52.460 } 00:11:52.460 ] 00:11:52.460 }, 00:11:52.460 { 00:11:52.460 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:52.460 "subtype": "NVMe", 00:11:52.460 "listen_addresses": [ 00:11:52.460 { 00:11:52.460 "trtype": "TCP", 00:11:52.460 "adrfam": "IPv4", 00:11:52.460 "traddr": "10.0.0.2", 
00:11:52.460 "trsvcid": "4420" 00:11:52.460 } 00:11:52.460 ], 00:11:52.460 "allow_any_host": true, 00:11:52.460 "hosts": [], 00:11:52.460 "serial_number": "SPDK00000000000003", 00:11:52.460 "model_number": "SPDK bdev Controller", 00:11:52.460 "max_namespaces": 32, 00:11:52.460 "min_cntlid": 1, 00:11:52.460 "max_cntlid": 65519, 00:11:52.460 "namespaces": [ 00:11:52.460 { 00:11:52.460 "nsid": 1, 00:11:52.460 "bdev_name": "Null3", 00:11:52.460 "name": "Null3", 00:11:52.460 "nguid": "7BBC9FBD1B7A424495DE7E78888C71D3", 00:11:52.460 "uuid": "7bbc9fbd-1b7a-4244-95de-7e78888c71d3" 00:11:52.460 } 00:11:52.460 ] 00:11:52.460 }, 00:11:52.460 { 00:11:52.460 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:52.460 "subtype": "NVMe", 00:11:52.460 "listen_addresses": [ 00:11:52.460 { 00:11:52.460 "trtype": "TCP", 00:11:52.460 "adrfam": "IPv4", 00:11:52.460 "traddr": "10.0.0.2", 00:11:52.460 "trsvcid": "4420" 00:11:52.460 } 00:11:52.460 ], 00:11:52.460 "allow_any_host": true, 00:11:52.460 "hosts": [], 00:11:52.460 "serial_number": "SPDK00000000000004", 00:11:52.460 "model_number": "SPDK bdev Controller", 00:11:52.460 "max_namespaces": 32, 00:11:52.460 "min_cntlid": 1, 00:11:52.460 "max_cntlid": 65519, 00:11:52.460 "namespaces": [ 00:11:52.460 { 00:11:52.460 "nsid": 1, 00:11:52.460 "bdev_name": "Null4", 00:11:52.460 "name": "Null4", 00:11:52.460 "nguid": "1E6D81DA49574C57AE0C0748426096E6", 00:11:52.460 "uuid": "1e6d81da-4957-4c57-ae0c-0748426096e6" 00:11:52.460 } 00:11:52.460 ] 00:11:52.460 } 00:11:52.460 ] 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.460 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.721 11:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.721 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:52.722 11:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.722 rmmod nvme_tcp 00:11:52.722 rmmod nvme_fabrics 00:11:52.722 rmmod nvme_keyring 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 913252 ']' 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 913252 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 913252 ']' 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 913252 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.722 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 913252 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 913252' 00:11:52.984 killing process with pid 913252 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 913252 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 913252 00:11:52.984 11:45:37 
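nvmftestfini above is the mirror image of the setup: nvmfcleanup unloads the host-side modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), killprocess stops the target by pid, and the entries that follow restore the firewall and remove the namespace. A rough sketch; the namespace-removal step is an assumption about what _remove_spdk_ns does, since its output is redirected away in the trace:

    modprobe -r nvme-tcp nvme-fabrics                    # detach host-side kernel modules
    kill 913252 && wait 913252                           # stop nvmf_tgt (pid from this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore # drop only the rules tagged SPDK_NVMF at setup
    ip netns delete cvl_0_0_ns_spdk                      # assumed: releases cvl_0_0 back to the root ns
    ip -4 addr flush cvl_0_1                             # scrub the initiator address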
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.984 11:45:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.530 00:11:55.530 real 0m11.640s 00:11:55.530 user 0m8.953s 00:11:55.530 sys 0m6.033s 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.530 ************************************ 00:11:55.530 END TEST nvmf_target_discovery 00:11:55.530 ************************************ 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.530 ************************************ 00:11:55.530 START TEST nvmf_referrals 00:11:55.530 ************************************ 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:55.530 * Looking for test storage... 
00:11:55.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:55.530 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:55.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.531 --rc genhtml_branch_coverage=1 00:11:55.531 --rc genhtml_function_coverage=1 00:11:55.531 --rc genhtml_legend=1 00:11:55.531 --rc geninfo_all_blocks=1 00:11:55.531 --rc geninfo_unexecuted_blocks=1 00:11:55.531 00:11:55.531 ' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:55.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.531 --rc genhtml_branch_coverage=1 00:11:55.531 --rc genhtml_function_coverage=1 00:11:55.531 --rc genhtml_legend=1 00:11:55.531 --rc geninfo_all_blocks=1 00:11:55.531 --rc geninfo_unexecuted_blocks=1 00:11:55.531 00:11:55.531 ' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:55.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.531 --rc genhtml_branch_coverage=1 00:11:55.531 --rc genhtml_function_coverage=1 00:11:55.531 --rc genhtml_legend=1 00:11:55.531 --rc geninfo_all_blocks=1 00:11:55.531 --rc geninfo_unexecuted_blocks=1 00:11:55.531 00:11:55.531 ' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:55.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.531 --rc genhtml_branch_coverage=1 00:11:55.531 --rc genhtml_function_coverage=1 00:11:55.531 --rc genhtml_legend=1 00:11:55.531 --rc geninfo_all_blocks=1 00:11:55.531 --rc geninfo_unexecuted_blocks=1 00:11:55.531 00:11:55.531 ' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
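One artifact worth noting in the trace above: build_nvmf_app_args evaluates '[' '' -eq 1 ']' with an empty variable, and bash logs "line 33: [: : integer expression expected" because test's -eq requires an integer on both sides. The run proceeds anyway, since the non-zero exit status simply makes the condition false. A minimal reproduction and the usual guard (the variable name here is illustrative):

    flag=""
    [ "$flag" -eq 1 ]        # -> "[: : integer expression expected", exit status 2
    [ "${flag:-0}" -eq 1 ]   # guarded form: an empty value defaults to 0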
00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.531 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:03.674 11:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.674 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:03.675 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:03.675 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:03.675 
11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:03.675 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:03.675 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:03.675 11:45:47 
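Interface discovery in the trace above is generic: gather_supported_nvmf_pci_devs matches PCI functions against known e810/x722/mlx vendor:device IDs (here 0x8086 - 0x159b, bound to the ice driver), then resolves each function to its kernel interface through sysfs. The core of that mapping, extracted from the trace:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per interface, e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

Both ports of the adapter resolve this way, which is how the list ends up as cvl_0_0 (target) and cvl_0_1 (initiator) before nvmf_tcp_init runs again for this test.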
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:03.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:12:03.675 00:12:03.675 --- 10.0.0.2 ping statistics --- 00:12:03.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.675 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:03.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:12:03.675 00:12:03.675 --- 10.0.0.1 ping statistics --- 00:12:03.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.675 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=917714 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 917714 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 917714 ']' 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
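[annotation] The two successful pings confirm the split topology the harness just built: the target-side port sits in a private network namespace with 10.0.0.2 while the initiator-side port stays in the root namespace with 10.0.0.1, so NVMe/TCP traffic crosses a real link even on one host. A sketch of that plumbing, using the interface and namespace names from this run (target binary path shortened; its flags are copied from the log):

# Move the target port into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Launch the target inside the namespace; it serves RPCs on /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &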
00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.675 11:45:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.675 [2024-10-11 11:45:47.486868] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:12:03.675 [2024-10-11 11:45:47.486933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.675 [2024-10-11 11:45:47.573106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.675 [2024-10-11 11:45:47.626836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.675 [2024-10-11 11:45:47.626890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.675 [2024-10-11 11:45:47.626899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.675 [2024-10-11 11:45:47.626906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.675 [2024-10-11 11:45:47.626912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.676 [2024-10-11 11:45:47.628990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.676 [2024-10-11 11:45:47.629143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.676 [2024-10-11 11:45:47.629304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.676 [2024-10-11 11:45:47.629305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.937 [2024-10-11 11:45:48.368939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:03.937 [2024-10-11 11:45:48.385283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:03.937 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.198 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:04.198 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:04.198 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:04.199 11:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.199 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:04.460 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.722 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.982 11:45:49 
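[annotation] referrals.sh is exercising the discovery-referral round trip: add referrals over RPC, read them back, and cross-check against what an initiator actually sees in the discovery log page. A condensed sketch of one cycle; 'rpc_cmd' in the trace wraps SPDK's scripts/rpc.py against the /var/tmp/spdk.sock shown earlier, and the trace also passes --hostnqn/--hostid to 'nvme discover', omitted here for brevity:

rpc=./scripts/rpc.py
$rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
$rpc nvmf_discovery_get_referrals | jq length    # expect 1 at this point
# Initiator view: every record except the current discovery subsystem
# should carry the referral's transport address.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
$rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
$rpc nvmf_discovery_get_referrals | jq length    # back to 0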
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:04.982 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:05.242 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:05.242 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:05.242 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:05.242 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:05.242 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:05.242 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.242 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:05.503 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:05.503 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:05.503 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:05.503 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:05.503 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.503 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.503 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
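[annotation] With the final empty-referral check passed, the trap is cleared and nvmftestfini tears the environment down (module unload, target kill, firewall and namespace cleanup follow below). The firewall part relies on a tagging trick visible when the rule was installed: every harness rule carries an SPDK_NVMF comment, so teardown can rewrite the ruleset wholesale instead of tracking rule positions. Sketch of the pattern, rule text copied from this run:

# Install with a recognizable tag...
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# ...then drop exactly the tagged rules on teardown.
iptables-save | grep -v SPDK_NVMF | iptables-restore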
00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.764 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.764 rmmod nvme_tcp 00:12:05.764 rmmod nvme_fabrics 00:12:05.764 rmmod nvme_keyring 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 917714 ']' 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 917714 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 917714 ']' 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 917714 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 917714 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 917714' 00:12:06.025 killing process with pid 917714 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 917714 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 917714 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.025 11:45:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.569 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.569 00:12:08.569 real 0m12.997s 00:12:08.569 user 0m14.979s 00:12:08.569 sys 0m6.360s 00:12:08.569 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.569 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.569 ************************************ 00:12:08.570 END TEST nvmf_referrals 00:12:08.570 ************************************ 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.570 ************************************ 00:12:08.570 START TEST nvmf_connect_disconnect 00:12:08.570 ************************************ 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:08.570 * Looking for test storage... 00:12:08.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.570 --rc genhtml_branch_coverage=1 00:12:08.570 --rc genhtml_function_coverage=1 00:12:08.570 --rc genhtml_legend=1 00:12:08.570 --rc geninfo_all_blocks=1 00:12:08.570 --rc geninfo_unexecuted_blocks=1 00:12:08.570 00:12:08.570 ' 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.570 --rc genhtml_branch_coverage=1 00:12:08.570 --rc genhtml_function_coverage=1 00:12:08.570 --rc genhtml_legend=1 00:12:08.570 --rc geninfo_all_blocks=1 00:12:08.570 --rc geninfo_unexecuted_blocks=1 00:12:08.570 00:12:08.570 ' 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.570 --rc genhtml_branch_coverage=1 00:12:08.570 --rc genhtml_function_coverage=1 00:12:08.570 --rc genhtml_legend=1 00:12:08.570 --rc geninfo_all_blocks=1 00:12:08.570 --rc geninfo_unexecuted_blocks=1 00:12:08.570 00:12:08.570 ' 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.570 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.570 --rc genhtml_branch_coverage=1 00:12:08.570 --rc genhtml_function_coverage=1 00:12:08.570 --rc genhtml_legend=1 00:12:08.570 --rc geninfo_all_blocks=1 00:12:08.570 --rc geninfo_unexecuted_blocks=1 00:12:08.570 00:12:08.570 ' 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.570 11:45:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.570 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.571 11:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.571 11:45:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.708 
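[annotation] The '[: : integer expression expected' complaint from common.sh line 33 above is the usual empty-operand numeric test: '[ "$x" -eq 1 ]' is an error when x is unset or empty. It is harmless here because the failed test simply reads as false and the script continues to the next branch. A minimal repro with the standard guard; FLAG is a stand-in name, not a harness variable:

FLAG=''
[ "$FLAG" -eq 1 ] && echo on        # -> [: : integer expression expected
[ "${FLAG:-0}" -eq 1 ] && echo on   # empty defaults to 0; no warning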
11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:16.708 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.708 
11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:16.708 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.708 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:16.709 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:16.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:12:16.709 00:12:16.709 --- 10.0.0.2 ping statistics --- 00:12:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.709 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:12:16.709 00:12:16.709 --- 10.0.0.1 ping statistics --- 00:12:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.709 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=922668 00:12:16.709 11:46:00 
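The commands traced above form the whole point-to-point topology for the TCP tests: the target-side port cvl_0_0 is moved into a private network namespace, each end gets an address on 10.0.0.0/24, an iptables rule tagged SPDK_NVMF opens TCP/4420, and a ping in each direction proves reachability before any NVMe traffic flows. Condensed from the xtrace:

    # Condensed from the log above; cvl_0_* are the renamed ice ports.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # namespace -> host

The comment tag matters later: teardown restores `iptables-save | grep -v SPDK_NVMF`, so only rules carrying SPDK_NVMF get removed.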
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 922668 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 922668 ']' 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.709 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.709 [2024-10-11 11:46:00.529707] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:12:16.709 [2024-10-11 11:46:00.529773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.709 [2024-10-11 11:46:00.618870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.709 [2024-10-11 11:46:00.672206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.709 [2024-10-11 11:46:00.672266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.709 [2024-10-11 11:46:00.672275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.709 [2024-10-11 11:46:00.672283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.709 [2024-10-11 11:46:00.672289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:16.709 [2024-10-11 11:46:00.674717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.709 [2024-10-11 11:46:00.674917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.709 [2024-10-11 11:46:00.675050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.709 [2024-10-11 11:46:00.675051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.971 [2024-10-11 11:46:01.407318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.971 11:46:01 
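nvmfappstart explains the startup notices just logged: the target runs inside the namespace with -m 0xF, hence the four reactors on cores 0-3, and -e 0xFFFF enables every tracepoint group (the spdk_trace hints). waitforlisten then blocks until the RPC socket answers. A rough equivalent, assuming SPDK's stock scripts/rpc.py is available:

    # Rough equivalent of nvmfappstart + waitforlisten; paths as in this workspace.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do   # poll /var/tmp/spdk.sock until the app listens
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
    done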
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:16.971 [2024-10-11 11:46:01.485346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:16.971 11:46:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:21.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.376 rmmod nvme_tcp 00:12:35.376 rmmod nvme_fabrics 00:12:35.376 rmmod nvme_keyring 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 922668 ']' 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 922668 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 922668 ']' 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 922668 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
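Before the loop, everything the test needs is created over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (returned as Malloc0), subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420. The five "disconnected 1 controller(s)" lines above are the loop body, one connect/disconnect round trip per iteration. A sketch from the initiator's point of view (the real connect_disconnect.sh also waits for the controller device to appear between the two calls):

    # Provisioning, as in the rpc_cmd trace above:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                 # -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Exercise loop (num_iterations=5), sketched:
    for _ in $(seq 1 5); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the "disconnected" lines
    done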
00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 922668 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 922668' 00:12:35.376 killing process with pid 922668 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 922668 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 922668 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.376 11:46:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.447 00:12:37.447 real 0m28.965s 00:12:37.447 user 1m17.778s 00:12:37.447 sys 0m6.891s 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.447 ************************************ 00:12:37.447 END TEST nvmf_connect_disconnect 00:12:37.447 ************************************ 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra 
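nvmftestfini is the mirror image of the setup and is shared by every test in this run: unload the initiator-side modules, kill the target by pid, strip only the SPDK_NVMF-tagged iptables rules, and tear down the namespace. Roughly (the netns delete is what _remove_spdk_ns amounts to; that body is an assumption here):

    # Condensed teardown, from the trace above:
    modprobe -v -r nvme-tcp nvme-fabrics      # the rmmod lines
    kill "$nvmfpid"                           # killprocess 922668
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk           # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1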
-- common/autotest_common.sh@10 -- # set +x 00:12:37.447 ************************************ 00:12:37.447 START TEST nvmf_multitarget 00:12:37.447 ************************************ 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:37.447 * Looking for test storage... 00:12:37.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.447 11:46:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:37.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.447 --rc genhtml_branch_coverage=1 00:12:37.447 --rc genhtml_function_coverage=1 00:12:37.447 --rc genhtml_legend=1 00:12:37.447 --rc geninfo_all_blocks=1 00:12:37.447 --rc geninfo_unexecuted_blocks=1 00:12:37.447 00:12:37.447 ' 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:37.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.447 --rc genhtml_branch_coverage=1 00:12:37.447 --rc genhtml_function_coverage=1 00:12:37.447 --rc genhtml_legend=1 00:12:37.447 --rc geninfo_all_blocks=1 00:12:37.447 --rc geninfo_unexecuted_blocks=1 00:12:37.447 00:12:37.447 ' 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:37.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.447 --rc genhtml_branch_coverage=1 00:12:37.447 --rc genhtml_function_coverage=1 00:12:37.447 --rc genhtml_legend=1 00:12:37.447 --rc geninfo_all_blocks=1 00:12:37.447 --rc geninfo_unexecuted_blocks=1 00:12:37.447 00:12:37.447 ' 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:37.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.447 --rc genhtml_branch_coverage=1 00:12:37.447 --rc genhtml_function_coverage=1 00:12:37.447 --rc genhtml_legend=1 00:12:37.447 --rc geninfo_all_blocks=1 00:12:37.447 --rc geninfo_unexecuted_blocks=1 00:12:37.447 00:12:37.447 ' 00:12:37.447 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.447 11:46:22 
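The `lt 1.15 2` exchange above is scripts/common.sh comparing the installed lcov version against 2 in pure bash: both version strings are split on '.', '-' and ':' into arrays, then compared component by component as integers, with missing components treated as 0. The same idea as a self-contained helper, assuming purely numeric components:

    # Pure-bash "is $1 older than $2", mirroring the cmp_versions trace above.
    version_lt() {
      local -a a b; local i
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: enable the branch/function coverage opts"

Here 1.15 splits to (1 15) and 2 to (2); the first component decides, matching the ver1[v]=1 / ver2[v]=2 steps in the log, and the older-lcov branch exports the --rc lcov_* options seen above.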
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:37.448 11:46:22 
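One genuine wart shows up above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it tests is empty, and test(1) refuses an empty string as an integer, hence the "integer expression expected" message. The test simply fails and the branch is skipped, so it is harmless but noisy; the usual hardening is a default expansion (the flag name below is hypothetical):

    # Hypothetical flag name; the fix is the ${...:-0} default, not the name.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo 'flag set'
    fi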
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.448 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.592 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.592 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:45.593 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:45.593 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:45.593 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:45.593 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:12:45.593 00:12:45.593 --- 10.0.0.2 ping statistics --- 00:12:45.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.593 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:12:45.593 00:12:45.593 --- 10.0.0.1 ping statistics --- 00:12:45.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.593 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.593 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=930621 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 930621 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 930621 ']' 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.594 [2024-10-11 11:46:29.588750] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:12:45.594 [2024-10-11 11:46:29.588814] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.594 [2024-10-11 11:46:29.655030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.594 [2024-10-11 11:46:29.703325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.594 [2024-10-11 11:46:29.703377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.594 [2024-10-11 11:46:29.703384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.594 [2024-10-11 11:46:29.703389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.594 [2024-10-11 11:46:29.703394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.594 [2024-10-11 11:46:29.705137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.594 [2024-10-11 11:46:29.705305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.594 [2024-10-11 11:46:29.705441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.594 [2024-10-11 11:46:29.705442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:45.594 11:46:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:45.594 "nvmf_tgt_1" 00:12:45.594 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:45.594 "nvmf_tgt_2" 00:12:45.594 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:45.594 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:45.855 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:45.855 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:45.855 true 00:12:45.855 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:46.116 true 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:46.116 rmmod nvme_tcp 00:12:46.116 rmmod nvme_fabrics 00:12:46.116 rmmod nvme_keyring 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 930621 ']' 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 930621 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 930621 ']' 00:12:46.116 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 930621 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930621 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:46.376 11:46:30 
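The multitarget assertions above reduce to counting: nvmf_get_targets returns a JSON array, jq length counts it, and the test expects 1 target before, 3 after the two nvmf_create_target calls, and 1 again after both deletes. As a standalone check built from the same RPCs:

    # Count-based assertion around the multitarget RPCs traced above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    expect_targets() {
      local n
      n=$("$rpc_py" nvmf_get_targets | jq length)
      if [ "$n" != "$1" ]; then
        echo "expected $1 targets, got $n" >&2
        return 1
      fi
    }
    expect_targets 1
    "$rpc_py" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc_py" nvmf_create_target -n nvmf_tgt_2 -s 32
    expect_targets 3
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_1
    "$rpc_py" nvmf_delete_target -n nvmf_tgt_2
    expect_targets 1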
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930621' 00:12:46.376 killing process with pid 930621 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 930621 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 930621 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.376 11:46:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:48.922 00:12:48.922 real 0m11.226s 00:12:48.922 user 0m8.093s 00:12:48.922 sys 0m5.980s 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.922 ************************************ 00:12:48.922 END TEST nvmf_multitarget 00:12:48.922 ************************************ 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:48.922 ************************************ 00:12:48.922 START TEST nvmf_rpc 00:12:48.922 ************************************ 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:48.922 * Looking for test storage... 
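(The teardown that closed the multitarget run above relies on one idiom worth noting: every firewall rule the tests add carries an SPDK_NVMF comment, so cleanup re-loads the ruleset with those lines filtered out, then flushes the test interface. A sketch of that sequence, using the names visible in this log; the explicit netns delete is an assumption about what _remove_spdk_ns does.)

# Drop every rule the tests added: all of them are tagged SPDK_NVMF,
# so filtering iptables-save output and restoring it removes them atomically.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Flush the initiator-side interface and drop the target's namespace.
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true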
00:12:48.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.922 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.923 --rc genhtml_branch_coverage=1 00:12:48.923 --rc genhtml_function_coverage=1 00:12:48.923 --rc genhtml_legend=1 00:12:48.923 --rc geninfo_all_blocks=1 00:12:48.923 --rc geninfo_unexecuted_blocks=1 00:12:48.923 00:12:48.923 ' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.923 --rc genhtml_branch_coverage=1 00:12:48.923 --rc genhtml_function_coverage=1 00:12:48.923 --rc genhtml_legend=1 00:12:48.923 --rc geninfo_all_blocks=1 00:12:48.923 --rc geninfo_unexecuted_blocks=1 00:12:48.923 00:12:48.923 ' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.923 --rc genhtml_branch_coverage=1 00:12:48.923 --rc genhtml_function_coverage=1 00:12:48.923 --rc genhtml_legend=1 00:12:48.923 --rc geninfo_all_blocks=1 00:12:48.923 --rc geninfo_unexecuted_blocks=1 00:12:48.923 00:12:48.923 ' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.923 --rc genhtml_branch_coverage=1 00:12:48.923 --rc genhtml_function_coverage=1 00:12:48.923 --rc genhtml_legend=1 00:12:48.923 --rc geninfo_all_blocks=1 00:12:48.923 --rc geninfo_unexecuted_blocks=1 00:12:48.923 00:12:48.923 ' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
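The lcov gating traced above (lt 1.15 2) uses the harness's component-wise version compare: both version strings are split on dots, dashes, and colons, then the numeric fields are compared left to right as base-10 integers. A condensed sketch of that logic, simplified to a strict less-than with missing fields treated as zero (the full cmp_versions in scripts/common.sh also validates each field and supports other operators):

# Sketch of the component-wise version compare traced above.
# Returns 0 (true) when $1 is strictly lower than $2; numeric fields only.
lt() {
    local IFS=.-: i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
        ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "old lcov: use the plain --rc coverage flags"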
00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:48.923 11:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.923 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:57.063 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:57.063 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:57.063 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:57.064 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:57.064 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:57.064 11:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:57.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:12:57.064 00:12:57.064 --- 10.0.0.2 ping statistics --- 00:12:57.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.064 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:12:57.064 00:12:57.064 --- 10.0.0.1 ping statistics --- 00:12:57.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.064 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=935127 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 935127 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 935127 ']' 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:57.064 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.064 [2024-10-11 11:46:40.960164] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
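Everything from here on runs against a private network namespace: one of the two looped-back ports (cvl_0_0) is moved into cvl_0_0_ns_spdk and the target is launched under ip netns exec, while the initiator side (cvl_0_1) stays in the default namespace, giving a realistic two-host NVMe/TCP path on a single machine. A sketch of that wiring and launch, with interface names, addresses, and flags taken from the trace above; the repo-relative nvmf_tgt path is an assumption.

# Recreate the two-"host" topology used by these tests: target side in a
# namespace, initiator side in the default namespace, 10.0.0.0/24 between them.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP on the initiator side, tagged so teardown can find the rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Start the target inside the namespace (path relative to the spdk checkout).
ip netns exec cvl_0_0_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &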
00:12:57.064 [2024-10-11 11:46:40.960226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.064 [2024-10-11 11:46:41.050753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.064 [2024-10-11 11:46:41.104323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.064 [2024-10-11 11:46:41.104386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.064 [2024-10-11 11:46:41.104395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.064 [2024-10-11 11:46:41.104402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.064 [2024-10-11 11:46:41.104408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.064 [2024-10-11 11:46:41.106707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.064 [2024-10-11 11:46:41.106825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.064 [2024-10-11 11:46:41.106983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.064 [2024-10-11 11:46:41.106985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:57.326 "tick_rate": 2400000000, 00:12:57.326 "poll_groups": [ 00:12:57.326 { 00:12:57.326 "name": "nvmf_tgt_poll_group_000", 00:12:57.326 "admin_qpairs": 0, 00:12:57.326 "io_qpairs": 0, 00:12:57.326 "current_admin_qpairs": 0, 00:12:57.326 "current_io_qpairs": 0, 00:12:57.326 "pending_bdev_io": 0, 00:12:57.326 "completed_nvme_io": 0, 00:12:57.326 "transports": [] 00:12:57.326 }, 00:12:57.326 { 00:12:57.326 "name": "nvmf_tgt_poll_group_001", 00:12:57.326 "admin_qpairs": 0, 00:12:57.326 "io_qpairs": 0, 00:12:57.326 "current_admin_qpairs": 0, 00:12:57.326 "current_io_qpairs": 0, 00:12:57.326 "pending_bdev_io": 0, 00:12:57.326 "completed_nvme_io": 0, 00:12:57.326 "transports": [] 00:12:57.326 }, 00:12:57.326 { 00:12:57.326 "name": "nvmf_tgt_poll_group_002", 00:12:57.326 "admin_qpairs": 0, 00:12:57.326 "io_qpairs": 0, 00:12:57.326 
"current_admin_qpairs": 0, 00:12:57.326 "current_io_qpairs": 0, 00:12:57.326 "pending_bdev_io": 0, 00:12:57.326 "completed_nvme_io": 0, 00:12:57.326 "transports": [] 00:12:57.326 }, 00:12:57.326 { 00:12:57.326 "name": "nvmf_tgt_poll_group_003", 00:12:57.326 "admin_qpairs": 0, 00:12:57.326 "io_qpairs": 0, 00:12:57.326 "current_admin_qpairs": 0, 00:12:57.326 "current_io_qpairs": 0, 00:12:57.326 "pending_bdev_io": 0, 00:12:57.326 "completed_nvme_io": 0, 00:12:57.326 "transports": [] 00:12:57.326 } 00:12:57.326 ] 00:12:57.326 }' 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.326 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.587 [2024-10-11 11:46:41.963645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:57.587 "tick_rate": 2400000000, 00:12:57.587 "poll_groups": [ 00:12:57.587 { 00:12:57.587 "name": "nvmf_tgt_poll_group_000", 00:12:57.587 "admin_qpairs": 0, 00:12:57.587 "io_qpairs": 0, 00:12:57.587 "current_admin_qpairs": 0, 00:12:57.587 "current_io_qpairs": 0, 00:12:57.587 "pending_bdev_io": 0, 00:12:57.587 "completed_nvme_io": 0, 00:12:57.587 "transports": [ 00:12:57.587 { 00:12:57.587 "trtype": "TCP" 00:12:57.587 } 00:12:57.587 ] 00:12:57.587 }, 00:12:57.587 { 00:12:57.587 "name": "nvmf_tgt_poll_group_001", 00:12:57.587 "admin_qpairs": 0, 00:12:57.587 "io_qpairs": 0, 00:12:57.587 "current_admin_qpairs": 0, 00:12:57.587 "current_io_qpairs": 0, 00:12:57.587 "pending_bdev_io": 0, 00:12:57.587 "completed_nvme_io": 0, 00:12:57.587 "transports": [ 00:12:57.587 { 00:12:57.587 "trtype": "TCP" 00:12:57.587 } 00:12:57.587 ] 00:12:57.587 }, 00:12:57.587 { 00:12:57.587 "name": "nvmf_tgt_poll_group_002", 00:12:57.587 "admin_qpairs": 0, 00:12:57.587 "io_qpairs": 0, 00:12:57.587 "current_admin_qpairs": 0, 00:12:57.587 "current_io_qpairs": 0, 00:12:57.587 "pending_bdev_io": 0, 00:12:57.587 "completed_nvme_io": 0, 00:12:57.587 "transports": [ 00:12:57.587 { 00:12:57.587 "trtype": "TCP" 
00:12:57.587 } 00:12:57.587 ] 00:12:57.587 }, 00:12:57.587 { 00:12:57.587 "name": "nvmf_tgt_poll_group_003", 00:12:57.587 "admin_qpairs": 0, 00:12:57.587 "io_qpairs": 0, 00:12:57.587 "current_admin_qpairs": 0, 00:12:57.587 "current_io_qpairs": 0, 00:12:57.587 "pending_bdev_io": 0, 00:12:57.587 "completed_nvme_io": 0, 00:12:57.587 "transports": [ 00:12:57.587 { 00:12:57.587 "trtype": "TCP" 00:12:57.587 } 00:12:57.587 ] 00:12:57.587 } 00:12:57.587 ] 00:12:57.587 }' 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:57.587 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:57.587 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 Malloc1 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 [2024-10-11 11:46:42.175499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:57.588 [2024-10-11 11:46:42.212543] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:57.588 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:57.588 could not add new controller: failed to write to nvme-fabrics device 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:57.588 11:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.588 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.849 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.849 11:46:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.234 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.234 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.234 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.234 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:59.234 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:01.146 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.147 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.147 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.147 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.147 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.147 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.147 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.407 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.407 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.407 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.407 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.408 [2024-10-11 11:46:45.859114] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:01.408 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:01.408 could not add new controller: failed to write to nvme-fabrics device 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.408 
11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.408 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.793 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.793 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.793 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.793 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:02.793 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:04.704 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:04.704 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:04.704 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.704 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:04.704 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.704 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:04.704 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.965 
11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.965 [2024-10-11 11:46:49.455745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.965 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.878 11:46:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.878 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.878 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.878 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.878 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.789 [2024-10-11 11:46:53.158001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.789 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.790 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.790 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.790 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.790 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.790 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.790 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.790 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.172 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.172 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.172 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.172 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.172 11:46:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.715 [2024-10-11 11:46:56.875157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.715 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.100 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.100 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.100 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.100 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:14.100 11:46:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:16.037 
11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
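The waitforserial / waitforserial_disconnect helpers that gate each cycle above can be reconstructed from the autotest_common.sh xtrace — a sketch only; the shipped helpers may differ in retry delays and edge cases.

  # Poll until lsblk shows the expected number of devices with this serial.
  waitforserial() {
    local i=0
    local nvme_device_counter=1 nvme_devices=0
    [[ -n ${2:-} ]] && nvme_device_counter=$2   # optional expected device count
    sleep 2                                     # initial settle time, per the trace
    while ((i++ <= 15)); do
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
      ((nvme_devices == nvme_device_counter)) && return 0
      sleep 1                                   # retry delay (assumption)
    done
    return 1
  }

  # Succeed once the serial no longer appears in the lsblk listing.
  waitforserial_disconnect() {
    local i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$1"; do
      ((i++ > 15)) && return 1
      sleep 1                                   # retry delay (assumption)
    done
    return 0
  }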
00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.037 [2024-10-11 11:47:00.556733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.037 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.420 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.420 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.420 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.420 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:17.420 11:47:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
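On the host side, the connect uses a hostnqn/hostid pair that nvmf/common.sh derives once per run from nvme gen-hostnqn (visible where common.sh is sourced later in this log). A standalone equivalent follows; the uuid-extraction expansion is an illustrative assumption.

  # Host-side connect/verify/disconnect against the test subsystem (sketch).
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # the test reuses the uuid as hostid (assumption)

  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME   # device carries the subsystem serial
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1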
00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.960 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.961 [2024-10-11 11:47:04.230758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.961 11:47:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.343 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.343 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.343 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.343 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:21.343 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.253 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:23.514 
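The second loop that starts here (rpc.sh lines 99-107 in the trace) drops the host connect entirely and only exercises the namespace add/remove RPC path — sketched below, reusing the $rpc and $nqn shorthand from the earlier sketch.

  # One pass of the second rpc.sh loop: pure RPC namespace churn, no host I/O.
  for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1    # no -n flag: nsid auto-assigned (1 here)
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
  done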
11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 [2024-10-11 11:47:07.918386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 [2024-10-11 11:47:07.990555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 
11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 [2024-10-11 11:47:08.054734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.514 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.515 [2024-10-11 11:47:08.118916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.515 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 [2024-10-11 11:47:08.191158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:23.776 "tick_rate": 2400000000, 00:13:23.776 "poll_groups": [ 00:13:23.776 { 00:13:23.776 "name": "nvmf_tgt_poll_group_000", 00:13:23.776 "admin_qpairs": 0, 00:13:23.776 "io_qpairs": 224, 00:13:23.776 "current_admin_qpairs": 0, 00:13:23.776 "current_io_qpairs": 0, 00:13:23.776 "pending_bdev_io": 0, 00:13:23.776 "completed_nvme_io": 274, 00:13:23.776 "transports": [ 00:13:23.776 { 00:13:23.776 "trtype": "TCP" 00:13:23.776 } 00:13:23.776 ] 00:13:23.776 }, 00:13:23.776 { 00:13:23.776 "name": "nvmf_tgt_poll_group_001", 00:13:23.776 "admin_qpairs": 1, 00:13:23.776 "io_qpairs": 223, 00:13:23.776 "current_admin_qpairs": 0, 00:13:23.776 "current_io_qpairs": 0, 00:13:23.776 "pending_bdev_io": 0, 00:13:23.776 "completed_nvme_io": 223, 00:13:23.776 "transports": [ 00:13:23.776 { 00:13:23.776 "trtype": "TCP" 00:13:23.776 } 00:13:23.776 ] 00:13:23.776 }, 00:13:23.776 { 00:13:23.776 "name": "nvmf_tgt_poll_group_002", 00:13:23.776 "admin_qpairs": 6, 00:13:23.776 "io_qpairs": 218, 00:13:23.776 "current_admin_qpairs": 0, 00:13:23.776 "current_io_qpairs": 0, 00:13:23.776 "pending_bdev_io": 0, 00:13:23.776 "completed_nvme_io": 220, 00:13:23.776 "transports": [ 00:13:23.776 { 00:13:23.776 "trtype": "TCP" 00:13:23.776 } 00:13:23.776 ] 00:13:23.776 }, 00:13:23.776 { 00:13:23.776 "name": "nvmf_tgt_poll_group_003", 00:13:23.776 "admin_qpairs": 0, 00:13:23.776 "io_qpairs": 224, 00:13:23.776 "current_admin_qpairs": 0, 00:13:23.776 "current_io_qpairs": 0, 00:13:23.776 "pending_bdev_io": 0, 00:13:23.776 "completed_nvme_io": 522, 00:13:23.776 "transports": [ 00:13:23.776 { 00:13:23.776 "trtype": "TCP" 00:13:23.776 } 00:13:23.776 ] 00:13:23.776 } 00:13:23.776 ] 00:13:23.776 }' 00:13:23.776 11:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.776 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.776 rmmod nvme_tcp 00:13:23.776 rmmod nvme_fabrics 00:13:23.776 rmmod nvme_keyring 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 935127 ']' 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 935127 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 935127 ']' 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 935127 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 935127 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 935127' 
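The closing assertion sums per-poll-group counters out of the nvmf_get_stats JSON above; rpc.sh's jsum is essentially the jq-plus-awk pipeline shown in the trace. A sketch — feeding the stats via a here-string is an assumption, since the trace only shows the filter and the reducer.

  # Sum one numeric field across all poll groups in the nvmf_get_stats output.
  jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
  }

  stats=$($rpc nvmf_get_stats)
  (($(jsum '.poll_groups[].admin_qpairs') > 0))   # 7 in this run
  (($(jsum '.poll_groups[].io_qpairs') > 0))      # 889 in this run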
00:13:24.036 killing process with pid 935127 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 935127 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 935127 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.036 11:47:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:26.580 00:13:26.580 real 0m37.564s 00:13:26.580 user 1m51.940s 00:13:26.580 sys 0m7.660s 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.580 ************************************ 00:13:26.580 END TEST nvmf_rpc 00:13:26.580 ************************************ 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.580 ************************************ 00:13:26.580 START TEST nvmf_invalid 00:13:26.580 ************************************ 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.580 * Looking for test storage... 
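For completeness, the nvmftestfini teardown traced just above reduces to roughly the following — a simplified sketch; pid bookkeeping and netns handling are elided, and this assumes the shell that launched nvmf_tgt is the one doing the kill/wait.

  # nvmftestfini, condensed from the trace (sketch).
  modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                      # stop the nvmf_tgt reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only SPDK's iptables rules
  ip -4 addr flush cvl_0_1                                # clear the test interface address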
00:13:26.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:26.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.580 --rc genhtml_branch_coverage=1 00:13:26.580 --rc genhtml_function_coverage=1 00:13:26.580 --rc genhtml_legend=1 00:13:26.580 --rc geninfo_all_blocks=1 00:13:26.580 --rc geninfo_unexecuted_blocks=1 00:13:26.580 00:13:26.580 ' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:26.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.580 --rc genhtml_branch_coverage=1 00:13:26.580 --rc genhtml_function_coverage=1 00:13:26.580 --rc genhtml_legend=1 00:13:26.580 --rc geninfo_all_blocks=1 00:13:26.580 --rc geninfo_unexecuted_blocks=1 00:13:26.580 00:13:26.580 ' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:26.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.580 --rc genhtml_branch_coverage=1 00:13:26.580 --rc genhtml_function_coverage=1 00:13:26.580 --rc genhtml_legend=1 00:13:26.580 --rc geninfo_all_blocks=1 00:13:26.580 --rc geninfo_unexecuted_blocks=1 00:13:26.580 00:13:26.580 ' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:26.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.580 --rc genhtml_branch_coverage=1 00:13:26.580 --rc genhtml_function_coverage=1 00:13:26.580 --rc genhtml_legend=1 00:13:26.580 --rc geninfo_all_blocks=1 00:13:26.580 --rc geninfo_unexecuted_blocks=1 00:13:26.580 00:13:26.580 ' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:26.580 11:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.580 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.581 11:47:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.581 11:47:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:34.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:34.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:34.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:34.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:34.718 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.719 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:34.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:13:34.719 00:13:34.719 --- 10.0.0.2 ping statistics --- 00:13:34.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.719 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:13:34.719 00:13:34.719 --- 10.0.0.1 ping statistics --- 00:13:34.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.719 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=944845 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 944845 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 944845 ']' 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.719 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.719 [2024-10-11 11:47:18.325096] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
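The trace above is the nvmf_tcp_init path from test/nvmf/common.sh: of the two detected E810 ports, cvl_0_0 is moved into a fresh network namespace to act as the NVMe-oF target at 10.0.0.2, its peer cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, an iptables rule opens the default NVMe/TCP port 4420, and both directions are verified with a single ping before nvmf_tgt is launched inside the namespace. A minimal sketch of the same topology, condensed from the commands in the trace (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are what this particular run used; they are not fixed by SPDK):

  # Interface names are the ones this run detected; substitute your own NIC pair.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listen port
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Putting only the target port in a namespace lets a single machine exercise a real TCP path between two physical ports without needing a second host.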
00:13:34.719 [2024-10-11 11:47:18.325168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.719 [2024-10-11 11:47:18.412081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.719 [2024-10-11 11:47:18.465119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.719 [2024-10-11 11:47:18.465173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.719 [2024-10-11 11:47:18.465182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.719 [2024-10-11 11:47:18.465192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.719 [2024-10-11 11:47:18.465199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.719 [2024-10-11 11:47:18.467277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.719 [2024-10-11 11:47:18.467436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.719 [2024-10-11 11:47:18.467602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.719 [2024-10-11 11:47:18.467603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31655 00:13:34.719 [2024-10-11 11:47:19.316571] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:34.719 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:34.719 { 00:13:34.719 "nqn": "nqn.2016-06.io.spdk:cnode31655", 00:13:34.719 "tgt_name": "foobar", 00:13:34.719 "method": "nvmf_create_subsystem", 00:13:34.719 "req_id": 1 00:13:34.719 } 00:13:34.719 Got JSON-RPC error response 00:13:34.719 response: 00:13:34.719 { 00:13:34.719 "code": -32603, 00:13:34.719 "message": "Unable to find target foobar" 00:13:34.719 }' 00:13:34.979 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:34.979 { 00:13:34.979 "nqn": "nqn.2016-06.io.spdk:cnode31655", 00:13:34.979 "tgt_name": "foobar", 00:13:34.979 "method": "nvmf_create_subsystem", 00:13:34.979 "req_id": 1 00:13:34.979 } 00:13:34.979 Got JSON-RPC error response 00:13:34.979 
response: 00:13:34.979 { 00:13:34.979 "code": -32603, 00:13:34.979 "message": "Unable to find target foobar" 00:13:34.979 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:34.979 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:34.979 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3667 00:13:34.979 [2024-10-11 11:47:19.509253] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3667: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:34.979 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:34.979 { 00:13:34.979 "nqn": "nqn.2016-06.io.spdk:cnode3667", 00:13:34.979 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:34.979 "method": "nvmf_create_subsystem", 00:13:34.979 "req_id": 1 00:13:34.979 } 00:13:34.979 Got JSON-RPC error response 00:13:34.979 response: 00:13:34.979 { 00:13:34.979 "code": -32602, 00:13:34.979 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:34.979 }' 00:13:34.979 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:34.979 { 00:13:34.979 "nqn": "nqn.2016-06.io.spdk:cnode3667", 00:13:34.979 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:34.979 "method": "nvmf_create_subsystem", 00:13:34.980 "req_id": 1 00:13:34.980 } 00:13:34.980 Got JSON-RPC error response 00:13:34.980 response: 00:13:34.980 { 00:13:34.980 "code": -32602, 00:13:34.980 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:34.980 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:34.980 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:34.980 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26443 00:13:35.241 [2024-10-11 11:47:19.693789] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26443: invalid model number 'SPDK_Controller' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:35.241 { 00:13:35.241 "nqn": "nqn.2016-06.io.spdk:cnode26443", 00:13:35.241 "model_number": "SPDK_Controller\u001f", 00:13:35.241 "method": "nvmf_create_subsystem", 00:13:35.241 "req_id": 1 00:13:35.241 } 00:13:35.241 Got JSON-RPC error response 00:13:35.241 response: 00:13:35.241 { 00:13:35.241 "code": -32602, 00:13:35.241 "message": "Invalid MN SPDK_Controller\u001f" 00:13:35.241 }' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:35.241 { 00:13:35.241 "nqn": "nqn.2016-06.io.spdk:cnode26443", 00:13:35.241 "model_number": "SPDK_Controller\u001f", 00:13:35.241 "method": "nvmf_create_subsystem", 00:13:35.241 "req_id": 1 00:13:35.241 } 00:13:35.241 Got JSON-RPC error response 00:13:35.241 response: 00:13:35.241 { 00:13:35.241 "code": -32602, 00:13:35.241 "message": "Invalid MN SPDK_Controller\u001f" 00:13:35.241 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:35.241 11:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:35.241 11:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:35.241 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:35.242 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:35.502 
11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'NNpIlR8L7ITzbkAVpp+Q\' 00:13:35.502 11:47:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'NNpIlR8L7ITzbkAVpp+Q\' nqn.2016-06.io.spdk:cnode20246 00:13:35.502 [2024-10-11 11:47:20.046927] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20246: invalid serial number 'NNpIlR8L7ITzbkAVpp+Q\' 00:13:35.502 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:35.502 { 00:13:35.502 "nqn": "nqn.2016-06.io.spdk:cnode20246", 00:13:35.502 "serial_number": "NNpIlR8L7ITzbkAVpp+Q\\", 00:13:35.502 "method": "nvmf_create_subsystem", 00:13:35.502 "req_id": 1 00:13:35.502 } 00:13:35.502 Got JSON-RPC error response 00:13:35.502 response: 00:13:35.502 { 00:13:35.502 "code": -32602, 00:13:35.502 "message": "Invalid SN NNpIlR8L7ITzbkAVpp+Q\\" 00:13:35.502 }' 00:13:35.502 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:35.502 { 00:13:35.502 "nqn": "nqn.2016-06.io.spdk:cnode20246", 00:13:35.502 "serial_number": "NNpIlR8L7ITzbkAVpp+Q\\", 00:13:35.502 "method": "nvmf_create_subsystem", 00:13:35.502 "req_id": 1 00:13:35.502 } 00:13:35.502 Got JSON-RPC error response 00:13:35.502 response: 00:13:35.502 { 00:13:35.502 "code": -32602, 00:13:35.502 "message": "Invalid SN NNpIlR8L7ITzbkAVpp+Q\\" 00:13:35.502 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:35.502 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:35.502 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:35.502 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' 
'76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:35.503 
11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.503 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:35.764 
11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.764 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
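The long run of near-identical trace lines above and below is gen_random_s from target/invalid.sh building a 41-character model number one character per iteration: it indexes a chars array holding the ASCII codes 32 through 127, converts the chosen code to hex with printf %x, and appends the result of echo -e '\xNN' to the string. A condensed sketch of that construction follows; the RANDOM-based index draw is an assumption, since only the per-character steps appear in the trace:

  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))      # same code points as the chars=('32' '33' ... '127') array
      for (( ll = 0; ll < length; ll++ )); do
          local code=${chars[RANDOM % ${#chars[@]}]}   # assumed draw; not visible in the trace
          string+=$(echo -e "\x$(printf %x "$code")")
      done
      # invalid.sh@28 checks the first character against '-' before echoing;
      # its handling when that check matches is not visible in this trace.
      echo "$string"
  }

  RANDOM=0          # invalid.sh@16 seeds RANDOM, so the "random" strings repeat across runs
  gen_random_s 21   # a 21-character serial; this run produced 'NNpIlR8L7ITzbkAVpp+Q\'

Seeding RANDOM=0 keeps the generated serial and model numbers reproducible from run to run, which is why the same strings can appear in other logs of this test.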
00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=1 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:35.765 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x5a' 00:13:35.766 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:35.766 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.766 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.766 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:35.766 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '[X6(y(1gkw7kU\u'\''\>t3KJ0(};j6aC{,1Nj 9PZb' 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[X6(y(1gkw7kU\u'\''\>t3KJ0(};j6aC{,1Nj 9PZb' nqn.2016-06.io.spdk:cnode9562 00:13:36.026 [2024-10-11 11:47:20.552560] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9562: invalid model number '[X6(y(1gkw7kU\u'\>t3KJ0(};j6aC{,1Nj 9PZb' 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:36.026 { 00:13:36.026 "nqn": "nqn.2016-06.io.spdk:cnode9562", 00:13:36.026 "model_number": "[X6(y(1gkw7kU\\u'\''\\>t3KJ0(};j6aC{,1Nj\u007f 9PZb", 00:13:36.026 "method": "nvmf_create_subsystem", 00:13:36.026 "req_id": 1 00:13:36.026 } 00:13:36.026 Got JSON-RPC error response 00:13:36.026 response: 00:13:36.026 { 00:13:36.026 "code": -32602, 00:13:36.026 "message": "Invalid MN [X6(y(1gkw7kU\\u'\''\\>t3KJ0(};j6aC{,1Nj\u007f 9PZb" 00:13:36.026 }' 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:36.026 { 00:13:36.026 "nqn": "nqn.2016-06.io.spdk:cnode9562", 00:13:36.026 "model_number": "[X6(y(1gkw7kU\\u'\\>t3KJ0(};j6aC{,1Nj\u007f 9PZb", 00:13:36.026 "method": "nvmf_create_subsystem", 00:13:36.026 "req_id": 1 00:13:36.026 } 00:13:36.026 Got JSON-RPC error response 00:13:36.026 response: 00:13:36.026 { 00:13:36.026 "code": -32602, 00:13:36.026 "message": "Invalid MN [X6(y(1gkw7kU\\u'\\>t3KJ0(};j6aC{,1Nj\u007f 9PZb" 00:13:36.026 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:36.026 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:36.287 [2024-10-11 11:47:20.741245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.287 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:36.548 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:36.548 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:36.548 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:36.548 11:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:36.548 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:36.548 [2024-10-11 11:47:21.123911] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:36.548 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:36.548 { 00:13:36.548 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:36.548 "listen_address": { 00:13:36.548 "trtype": "tcp", 00:13:36.548 "traddr": "", 00:13:36.548 "trsvcid": "4421" 00:13:36.548 }, 00:13:36.548 "method": "nvmf_subsystem_remove_listener", 00:13:36.548 "req_id": 1 00:13:36.548 } 00:13:36.548 Got JSON-RPC error response 00:13:36.548 response: 00:13:36.548 { 00:13:36.548 "code": -32602, 00:13:36.548 "message": "Invalid parameters" 00:13:36.548 }' 00:13:36.548 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:36.548 { 00:13:36.548 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:36.548 "listen_address": { 00:13:36.548 "trtype": "tcp", 00:13:36.548 "traddr": "", 00:13:36.548 "trsvcid": "4421" 00:13:36.548 }, 00:13:36.548 "method": "nvmf_subsystem_remove_listener", 00:13:36.548 "req_id": 1 00:13:36.548 } 00:13:36.548 Got JSON-RPC error response 00:13:36.548 response: 00:13:36.548 { 00:13:36.548 "code": -32602, 00:13:36.549 "message": "Invalid parameters" 00:13:36.549 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:36.549 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16343 -i 0 00:13:36.810 [2024-10-11 11:47:21.312468] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16343: invalid cntlid range [0-65519] 00:13:36.810 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:36.810 { 00:13:36.810 "nqn": "nqn.2016-06.io.spdk:cnode16343", 00:13:36.810 "min_cntlid": 0, 00:13:36.810 "method": "nvmf_create_subsystem", 00:13:36.810 "req_id": 1 00:13:36.810 } 00:13:36.810 Got JSON-RPC error response 00:13:36.810 response: 00:13:36.810 { 00:13:36.810 "code": -32602, 00:13:36.810 "message": "Invalid cntlid range [0-65519]" 00:13:36.810 }' 00:13:36.810 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:36.810 { 00:13:36.810 "nqn": "nqn.2016-06.io.spdk:cnode16343", 00:13:36.810 "min_cntlid": 0, 00:13:36.810 "method": "nvmf_create_subsystem", 00:13:36.810 "req_id": 1 00:13:36.810 } 00:13:36.810 Got JSON-RPC error response 00:13:36.810 response: 00:13:36.810 { 00:13:36.810 "code": -32602, 00:13:36.810 "message": "Invalid cntlid range [0-65519]" 00:13:36.810 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.810 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5988 -i 65520 00:13:37.070 [2024-10-11 11:47:21.501109] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5988: invalid cntlid range [65520-65519] 00:13:37.070 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:37.070 { 00:13:37.070 "nqn": 
"nqn.2016-06.io.spdk:cnode5988", 00:13:37.070 "min_cntlid": 65520, 00:13:37.070 "method": "nvmf_create_subsystem", 00:13:37.070 "req_id": 1 00:13:37.070 } 00:13:37.070 Got JSON-RPC error response 00:13:37.070 response: 00:13:37.070 { 00:13:37.070 "code": -32602, 00:13:37.070 "message": "Invalid cntlid range [65520-65519]" 00:13:37.070 }' 00:13:37.070 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:37.070 { 00:13:37.070 "nqn": "nqn.2016-06.io.spdk:cnode5988", 00:13:37.070 "min_cntlid": 65520, 00:13:37.070 "method": "nvmf_create_subsystem", 00:13:37.070 "req_id": 1 00:13:37.070 } 00:13:37.070 Got JSON-RPC error response 00:13:37.070 response: 00:13:37.070 { 00:13:37.070 "code": -32602, 00:13:37.070 "message": "Invalid cntlid range [65520-65519]" 00:13:37.070 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.070 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1307 -I 0 00:13:37.070 [2024-10-11 11:47:21.689704] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1307: invalid cntlid range [1-0] 00:13:37.330 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:37.330 { 00:13:37.330 "nqn": "nqn.2016-06.io.spdk:cnode1307", 00:13:37.330 "max_cntlid": 0, 00:13:37.330 "method": "nvmf_create_subsystem", 00:13:37.330 "req_id": 1 00:13:37.330 } 00:13:37.330 Got JSON-RPC error response 00:13:37.330 response: 00:13:37.330 { 00:13:37.330 "code": -32602, 00:13:37.330 "message": "Invalid cntlid range [1-0]" 00:13:37.330 }' 00:13:37.330 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:37.330 { 00:13:37.330 "nqn": "nqn.2016-06.io.spdk:cnode1307", 00:13:37.330 "max_cntlid": 0, 00:13:37.330 "method": "nvmf_create_subsystem", 00:13:37.330 "req_id": 1 00:13:37.330 } 00:13:37.330 Got JSON-RPC error response 00:13:37.330 response: 00:13:37.330 { 00:13:37.330 "code": -32602, 00:13:37.330 "message": "Invalid cntlid range [1-0]" 00:13:37.330 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.330 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30379 -I 65520 00:13:37.330 [2024-10-11 11:47:21.870271] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30379: invalid cntlid range [1-65520] 00:13:37.330 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:37.330 { 00:13:37.330 "nqn": "nqn.2016-06.io.spdk:cnode30379", 00:13:37.330 "max_cntlid": 65520, 00:13:37.330 "method": "nvmf_create_subsystem", 00:13:37.330 "req_id": 1 00:13:37.330 } 00:13:37.330 Got JSON-RPC error response 00:13:37.330 response: 00:13:37.330 { 00:13:37.330 "code": -32602, 00:13:37.330 "message": "Invalid cntlid range [1-65520]" 00:13:37.330 }' 00:13:37.330 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:37.330 { 00:13:37.330 "nqn": "nqn.2016-06.io.spdk:cnode30379", 00:13:37.330 "max_cntlid": 65520, 00:13:37.330 "method": "nvmf_create_subsystem", 00:13:37.330 "req_id": 1 00:13:37.330 } 00:13:37.330 Got JSON-RPC error response 00:13:37.330 response: 00:13:37.330 { 00:13:37.330 "code": -32602, 00:13:37.330 "message": "Invalid cntlid range [1-65520]" 
00:13:37.330 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.330 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23100 -i 6 -I 5 00:13:37.590 [2024-10-11 11:47:22.058887] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23100: invalid cntlid range [6-5] 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:37.590 { 00:13:37.590 "nqn": "nqn.2016-06.io.spdk:cnode23100", 00:13:37.590 "min_cntlid": 6, 00:13:37.590 "max_cntlid": 5, 00:13:37.590 "method": "nvmf_create_subsystem", 00:13:37.590 "req_id": 1 00:13:37.590 } 00:13:37.590 Got JSON-RPC error response 00:13:37.590 response: 00:13:37.590 { 00:13:37.590 "code": -32602, 00:13:37.590 "message": "Invalid cntlid range [6-5]" 00:13:37.590 }' 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:37.590 { 00:13:37.590 "nqn": "nqn.2016-06.io.spdk:cnode23100", 00:13:37.590 "min_cntlid": 6, 00:13:37.590 "max_cntlid": 5, 00:13:37.590 "method": "nvmf_create_subsystem", 00:13:37.590 "req_id": 1 00:13:37.590 } 00:13:37.590 Got JSON-RPC error response 00:13:37.590 response: 00:13:37.590 { 00:13:37.590 "code": -32602, 00:13:37.590 "message": "Invalid cntlid range [6-5]" 00:13:37.590 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:37.590 { 00:13:37.590 "name": "foobar", 00:13:37.590 "method": "nvmf_delete_target", 00:13:37.590 "req_id": 1 00:13:37.590 } 00:13:37.590 Got JSON-RPC error response 00:13:37.590 response: 00:13:37.590 { 00:13:37.590 "code": -32602, 00:13:37.590 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:37.590 }' 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:37.590 { 00:13:37.590 "name": "foobar", 00:13:37.590 "method": "nvmf_delete_target", 00:13:37.590 "req_id": 1 00:13:37.590 } 00:13:37.590 Got JSON-RPC error response 00:13:37.590 response: 00:13:37.590 { 00:13:37.590 "code": -32602, 00:13:37.590 "message": "The specified target doesn't exist, cannot delete it." 
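Every negative case in this run, including the nvmf_delete_target probe in flight here, follows the same harness idiom: capture the RPC's error output into out, then glob-match the expected message with [[ ]] so the case fails if the wrong error (or none) comes back. Condensed, with the workspace path from this log and a '|| true' assumed so the expected failure survives an errexit shell:

    # The capture-and-match idiom behind the out=/[[ ]] pairs in this test.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    # The RPC is expected to fail; 2>&1 folds the JSON-RPC error text into
    # $out, and '|| true' keeps a 'set -e' shell alive through it.
    out=$("$rpc" nvmf_delete_target --name foobar 2>&1) || true
    [[ $out == *"The specified target doesn't exist"* ]] \
        || { echo "unexpected output: $out"; exit 1; }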
00:13:37.590 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:37.590 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:37.590 rmmod nvme_tcp 00:13:37.851 rmmod nvme_fabrics 00:13:37.851 rmmod nvme_keyring 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 944845 ']' 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 944845 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 944845 ']' 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 944845 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 944845 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 944845' 00:13:37.851 killing process with pid 944845 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 944845 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 944845 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep 
-v SPDK_NVMF 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.851 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:40.399 00:13:40.399 real 0m13.745s 00:13:40.399 user 0m20.437s 00:13:40.399 sys 0m6.422s 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:40.399 ************************************ 00:13:40.399 END TEST nvmf_invalid 00:13:40.399 ************************************ 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.399 ************************************ 00:13:40.399 START TEST nvmf_connect_stress 00:13:40.399 ************************************ 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.399 * Looking for test storage... 
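One detail of the teardown just completed is worth spelling out: the harness removes only its own firewall rules. Every rule it installs carries an iptables comment tagged SPDK_NVMF (the install side appears later in this log), so cleanup is a filter-and-reload of the saved ruleset. In sketch form:

    # Teardown trick from nvmftestfini above: dump the ruleset, drop only
    # the SPDK-tagged rules, and load everything else back unchanged.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Install side, as the ipts helper does further down in this log: the
    # comment records the exact arguments, making the rule self-describing.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'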
00:13:40.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:40.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.399 --rc genhtml_branch_coverage=1 00:13:40.399 --rc genhtml_function_coverage=1 00:13:40.399 --rc genhtml_legend=1 00:13:40.399 --rc geninfo_all_blocks=1 00:13:40.399 --rc geninfo_unexecuted_blocks=1 00:13:40.399 00:13:40.399 ' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:40.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.399 --rc genhtml_branch_coverage=1 00:13:40.399 --rc genhtml_function_coverage=1 00:13:40.399 --rc genhtml_legend=1 00:13:40.399 --rc geninfo_all_blocks=1 00:13:40.399 --rc geninfo_unexecuted_blocks=1 00:13:40.399 00:13:40.399 ' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:40.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.399 --rc genhtml_branch_coverage=1 00:13:40.399 --rc genhtml_function_coverage=1 00:13:40.399 --rc genhtml_legend=1 00:13:40.399 --rc geninfo_all_blocks=1 00:13:40.399 --rc geninfo_unexecuted_blocks=1 00:13:40.399 00:13:40.399 ' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:40.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.399 --rc genhtml_branch_coverage=1 00:13:40.399 --rc genhtml_function_coverage=1 00:13:40.399 --rc genhtml_legend=1 00:13:40.399 --rc geninfo_all_blocks=1 00:13:40.399 --rc geninfo_unexecuted_blocks=1 00:13:40.399 00:13:40.399 ' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.399 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:40.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:40.400 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.540 11:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:48.540 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:48.540 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:48.540 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:48.540 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.540 11:47:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:48.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:13:48.540 00:13:48.540 --- 10.0.0.2 ping statistics --- 00:13:48.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.540 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:13:48.540 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:13:48.540 00:13:48.541 --- 10.0.0.1 ping statistics --- 00:13:48.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.541 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=950025 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 950025 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 950025 ']' 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:48.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.541 11:47:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 [2024-10-11 11:47:32.279260] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:13:48.541 [2024-10-11 11:47:32.279330] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.541 [2024-10-11 11:47:32.367799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.541 [2024-10-11 11:47:32.419730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.541 [2024-10-11 11:47:32.419780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.541 [2024-10-11 11:47:32.419788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.541 [2024-10-11 11:47:32.419796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.541 [2024-10-11 11:47:32.419802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.541 [2024-10-11 11:47:32.421880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.541 [2024-10-11 11:47:32.422110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.541 [2024-10-11 11:47:32.422110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 [2024-10-11 11:47:33.116691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
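With nvmf_tgt running inside the cvl_0_0_ns_spdk namespace and listening on /var/tmp/spdk.sock, the stress test drives setup over JSON-RPC: the TCP transport above, then, as the following trace shows, a subsystem capped at 10 controllers, a TCP listener on 10.0.0.2:4420, and a 1000 MB null bdev with 512-byte blocks. Written as direct rpc.py calls (rpc_cmd in the trace is, to a first approximation, a wrapper over the same script):

    # The four setup RPCs connect_stress.sh issues around this point,
    # written out as direct rpc.py invocations for clarity.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512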
00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 [2024-10-11 11:47:33.141114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.541 NULL1 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=950192 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.541 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.811 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.812 11:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.812 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.082 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.082 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:49.082 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.082 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.082 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.343 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.343 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:49.343 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.343 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.343 11:47:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.915 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.915 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:49.915 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.915 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.915 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.175 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.175 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:50.175 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.176 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.176 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.436 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.436 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:50.436 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.436 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.436 11:47:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.696 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.696 11:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:50.697 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.697 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.697 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.957 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.957 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:50.957 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.957 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.957 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.527 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.527 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:51.527 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.527 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.527 11:47:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.787 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.787 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:51.787 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.787 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.787 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.048 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.048 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:52.048 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.048 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.048 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.309 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.309 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:52.309 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.309 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.309 11:47:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.569 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.569 11:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:52.569 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.569 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.569 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.139 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.139 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:53.139 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.139 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.139 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.399 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.399 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:53.399 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.399 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.399 11:47:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.659 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.659 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:53.659 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.659 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.659 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.919 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.919 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:53.919 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.919 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.919 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.179 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.179 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:54.179 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.179 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.179 11:47:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.747 11:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:54.747 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.747 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.747 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.007 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.007 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:55.007 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.007 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.007 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.267 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.267 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:55.267 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.267 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.268 11:47:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.528 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.528 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:55.528 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.528 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.528 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.788 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.788 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:55.788 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.788 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.788 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.356 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.356 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:56.356 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.356 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.356 11:47:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.617 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.617 11:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:56.617 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.617 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.617 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.877 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.877 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:56.877 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.877 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.877 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.138 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.138 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:57.138 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.138 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.138 11:47:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.708 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.708 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:57.708 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.708 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.709 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.969 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.969 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:57.969 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.969 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.969 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.229 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.229 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:58.229 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.229 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.229 11:47:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.490 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.490 11:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:58.490 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.490 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.490 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.750 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 950192 00:13:58.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (950192) - No such process 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 950192 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.750 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.750 rmmod nvme_tcp 00:13:58.750 rmmod nvme_fabrics 00:13:59.012 rmmod nvme_keyring 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 950025 ']' 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 950025 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 950025 ']' 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 950025 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 950025 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
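
The long run of kill -0 / rpc_cmd pairs traced above is connect_stress.sh's watchdog: it keeps the target busy with RPCs for as long as the stress client (PID $PERF_PID) is alive, then reaps it. A condensed sketch of that loop under the names the trace shows (rpc_cmd and the $rpcs batch file built by the seq 1 20 / cat loop are the harness's; the exact redirection is an assumption):

    while kill -0 "$PERF_PID" 2>/dev/null; do    # kill -0 only probes existence
        rpc_cmd < "$rpcs"                        # replay the 20 batched RPCs
    done                                         # "No such process" above is the exit cue
    wait "$PERF_PID"                             # reap the client; its status is the verdict
    rm -f "$rpcs"
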
00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 950025' 00:13:59.012 killing process with pid 950025 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 950025 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 950025 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.012 11:47:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:01.556 00:14:01.556 real 0m21.074s 00:14:01.556 user 0m42.149s 00:14:01.556 sys 0m9.125s 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.556 ************************************ 00:14:01.556 END TEST nvmf_connect_stress 00:14:01.556 ************************************ 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:01.556 ************************************ 00:14:01.556 START TEST nvmf_fused_ordering 00:14:01.556 ************************************ 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:01.556 * Looking for test storage... 
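
Between tests, the harness tears down through the trap contract it set at the start of the test; the trap removal, nvmftestfini and killprocess calls traced above are that contract firing on the normal path. A sketch of the pattern, with the helper names taken from the trace:

    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    # ... test body: RPCs, stress client, watchdog ...
    trap - SIGINT SIGTERM EXIT     # disarm so the teardown runs exactly once
    nvmftestfini                   # rmmod nvme-tcp/nvme-fabrics, kill $nvmfpid, flush test IPs
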
00:14:01.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:01.556 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:01.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.557 --rc genhtml_branch_coverage=1 00:14:01.557 --rc genhtml_function_coverage=1 00:14:01.557 --rc genhtml_legend=1 00:14:01.557 --rc geninfo_all_blocks=1 00:14:01.557 --rc geninfo_unexecuted_blocks=1 00:14:01.557 00:14:01.557 ' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:01.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.557 --rc genhtml_branch_coverage=1 00:14:01.557 --rc genhtml_function_coverage=1 00:14:01.557 --rc genhtml_legend=1 00:14:01.557 --rc geninfo_all_blocks=1 00:14:01.557 --rc geninfo_unexecuted_blocks=1 00:14:01.557 00:14:01.557 ' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:01.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.557 --rc genhtml_branch_coverage=1 00:14:01.557 --rc genhtml_function_coverage=1 00:14:01.557 --rc genhtml_legend=1 00:14:01.557 --rc geninfo_all_blocks=1 00:14:01.557 --rc geninfo_unexecuted_blocks=1 00:14:01.557 00:14:01.557 ' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:01.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.557 --rc genhtml_branch_coverage=1 00:14:01.557 --rc genhtml_function_coverage=1 00:14:01.557 --rc genhtml_legend=1 00:14:01.557 --rc geninfo_all_blocks=1 00:14:01.557 --rc geninfo_unexecuted_blocks=1 00:14:01.557 00:14:01.557 ' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:01.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.557 11:47:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.557 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:01.557 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:01.557 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:01.558 11:47:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:09.699 11:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.699 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:09.700 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:09.700 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:09.700 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:09.700 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:09.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:14:09.700 00:14:09.700 --- 10.0.0.2 ping statistics --- 00:14:09.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.700 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:14:09.700 00:14:09.700 --- 10.0.0.1 ping statistics --- 00:14:09.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.700 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=956415 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 956415 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 956415 ']' 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:09.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.700 11:47:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.700 [2024-10-11 11:47:53.542528] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:09.701 [2024-10-11 11:47:53.542594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.701 [2024-10-11 11:47:53.632826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.701 [2024-10-11 11:47:53.683436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.701 [2024-10-11 11:47:53.683486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.701 [2024-10-11 11:47:53.683495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.701 [2024-10-11 11:47:53.683502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.701 [2024-10-11 11:47:53.683509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.701 [2024-10-11 11:47:53.684300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.962 [2024-10-11 11:47:54.413683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.962 [2024-10-11 11:47:54.437962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.962 NULL1 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.962 11:47:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:09.962 [2024-10-11 11:47:54.508532] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
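Before the fused_ordering tool's startup output continues below, note that the rpc_cmd provisioning traced above maps onto plain rpc.py invocations against the default /var/tmp/spdk.sock socket; because that socket lives on the filesystem, it stays reachable even though nvmf_tgt itself runs inside the cvl_0_0_ns_spdk network namespace. A minimal standalone sketch (not part of the test scripts; every path, NQN, and argument is taken verbatim from the trace above):

    # Sketch: replay the target provisioning shown in this trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # transport flags exactly as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                  # matches the 'size: 1GB' namespace below
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1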
00:14:09.962 [2024-10-11 11:47:54.508576] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956724 ] 00:14:10.533 Attached to nqn.2016-06.io.spdk:cnode1 00:14:10.533 Namespace ID: 1 size: 1GB 00:14:10.533 fused_ordering(0) ... fused_ordering(420) [421 consecutive entries, 00:14:10.533 through 00:14:11.369, elided]
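The fused_ordering(N) counters continue below through 1023; each entry is the fused_ordering tool printing its iteration counter. When auditing a saved copy of a log like this one, the run can be checked for gaps or reordering with a short pipeline; a sketch, assuming the raw console output was saved as build.log (a placeholder name) and contains a single run:

    # Sketch: confirm the fused_ordering indices form one gap-free ascending run
    grep -o 'fused_ordering([0-9]*)' build.log | tr -dc '0-9\n' |
      awk 'NR > 1 && $1 != prev + 1 { printf "gap/reorder at %d (prev %d)\n", $1, prev; bad = 1 }
           { prev = $1 }
           END { if (!bad) printf "sequence OK: %d entries\n", NR }'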
fused_ordering(421) ... fused_ordering(1023) [603 consecutive entries, 00:14:11.369 through 00:14:12.514, elided] 00:14:12.514 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:12.514 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:12.514 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:12.514 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:12.514 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:12.514 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:12.514 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:12.514 11:47:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:12.514 rmmod nvme_tcp 00:14:12.514 rmmod nvme_fabrics 00:14:12.514 rmmod nvme_keyring 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:12.514 11:47:57
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 956415 ']' 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 956415 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 956415 ']' 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 956415 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 956415 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 956415' 00:14:12.514 killing process with pid 956415 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 956415 00:14:12.514 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 956415 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.775 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:12.776 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.776 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.776 11:47:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.689 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:14.689 00:14:14.689 real 0m13.515s 00:14:14.689 user 0m7.248s 00:14:14.689 sys 0m7.191s 00:14:14.689 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.689 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.689 ************************************ 00:14:14.689 END TEST nvmf_fused_ordering 00:14:14.689 
************************************ 00:14:14.689 11:47:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:14.689 11:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:14.689 11:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.689 11:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:14.951 ************************************ 00:14:14.951 START TEST nvmf_ns_masking 00:14:14.951 ************************************ 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:14.951 * Looking for test storage... 00:14:14.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:14.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.951 --rc genhtml_branch_coverage=1 00:14:14.951 --rc genhtml_function_coverage=1 00:14:14.951 --rc genhtml_legend=1 00:14:14.951 --rc geninfo_all_blocks=1 00:14:14.951 --rc geninfo_unexecuted_blocks=1 00:14:14.951 00:14:14.951 ' 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:14.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.951 --rc genhtml_branch_coverage=1 00:14:14.951 --rc genhtml_function_coverage=1 00:14:14.951 --rc genhtml_legend=1 00:14:14.951 --rc geninfo_all_blocks=1 00:14:14.951 --rc geninfo_unexecuted_blocks=1 00:14:14.951 00:14:14.951 ' 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:14.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.951 --rc genhtml_branch_coverage=1 00:14:14.951 --rc genhtml_function_coverage=1 00:14:14.951 --rc genhtml_legend=1 00:14:14.951 --rc geninfo_all_blocks=1 00:14:14.951 --rc geninfo_unexecuted_blocks=1 00:14:14.951 00:14:14.951 ' 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:14.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.951 --rc genhtml_branch_coverage=1 00:14:14.951 --rc genhtml_function_coverage=1 00:14:14.951 --rc genhtml_legend=1 00:14:14.951 --rc geninfo_all_blocks=1 00:14:14.951 --rc geninfo_unexecuted_blocks=1 00:14:14.951 00:14:14.951 ' 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.951 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.213 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... repeated toolchain entries elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... repeated toolchain entries elided ...]:/var/lib/snapd/snap/bin 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain entries elided ...]:/var/lib/snapd/snap/bin 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain entries elided ...]:/var/lib/snapd/snap/bin 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9e2045a0-2772-42d5-8e40-10a86faa5e3c 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=df2a9462-f06a-4ab7-bd29-ead29386cccc 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8ffa2998-fe6a-4952-957f-c7af3340b7bf 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:15.214 11:47:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:23.357 11:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:23.357 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:23.357 11:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:23.357 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.357 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:23.358 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:14:23.358 Found net devices under 0000:4b:00.1: cvl_0_1
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:23.358 11:48:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
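The trace above is nvmf_tcp_init building the point-to-point test topology: the second NIC port (cvl_0_0) is moved into a fresh network namespace to act as the target side, the first port (cvl_0_1) stays in the root namespace as the initiator, and each side gets a 10.0.0.x/24 address. Condensed from the commands in the trace, the pattern is (a sketch; interface names and addresses are the ones this run discovered, run as root):

    # Target-side port lives in its own namespace; initiator stays in the root ns.
    TARGET_IF=cvl_0_0        # gets 10.0.0.2 inside the namespace
    INITIATOR_IF=cvl_0_1     # gets 10.0.0.1 in the root namespace
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up

Keeping the target port in its own namespace lets a single machine exercise a real TCP path between two physical ports without needing a second host.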
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:23.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:23.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms
00:14:23.358
00:14:23.358 --- 10.0.0.2 ping statistics ---
00:14:23.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:23.358 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:23.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:23.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms
00:14:23.358
00:14:23.358 --- 10.0.0.1 ping statistics ---
00:14:23.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:23.358 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=961542
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 961542
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 961542 ']'
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:23.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:23.358 11:48:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:23.358 [2024-10-11 11:48:07.181603] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:14:23.358 [2024-10-11 11:48:07.181676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:23.358 [2024-10-11 11:48:07.268603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:23.358 [2024-10-11 11:48:07.320307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:23.358 [2024-10-11 11:48:07.320360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:23.358 [2024-10-11 11:48:07.320368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:23.358 [2024-10-11 11:48:07.320375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:23.358 [2024-10-11 11:48:07.320382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
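With the namespace plumbed, the harness opens TCP port 4420 through iptables, pings across both directions, and then nvmfappstart launches the NVMe-oF target inside the namespace, polling until its RPC socket appears (max_retries=100 in the trace). A minimal sketch of that launch-and-wait step; the retry loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact code:

    # Launch nvmf_tgt inside the target namespace and wait for its RPC socket.
    # -i 0 is the shared-memory id, -e 0xFFFF the tracepoint group mask.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used by this run
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    for _ in $(seq 1 100); do                 # approximates waitforlisten
        [ -S /var/tmp/spdk.sock ] && break    # RPC domain socket is up
        sleep 0.1
    done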
00:14:23.358 [2024-10-11 11:48:07.321135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:23.620 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:23.620 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0
00:14:23.620 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:14:23.620 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable
00:14:23.620 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:23.620 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:23.620 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:14:23.620 [2024-10-11 11:48:08.223018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:23.881 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:14:23.881 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:14:23.881 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:14:23.881 Malloc1
00:14:23.881 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:14:24.141 Malloc2
00:14:24.141 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:14:24.402 11:48:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:14:24.663 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:24.663 [2024-10-11 11:48:09.250246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:24.663 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:14:24.663 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8ffa2998-fe6a-4952-957f-c7af3340b7bf -a 10.0.0.2 -s 4420 -i 4
00:14:24.924 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:14:24.924 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:14:24.924 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:24.924 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:14:24.924 11:48:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:27.471 [ 0]:0x1
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d483bda402e0464492b78e1b27463baa
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d483bda402e0464492b78e1b27463baa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:27.471 [ 0]:0x1
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d483bda402e0464492b78e1b27463baa
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d483bda402e0464492b78e1b27463baa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
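At this point the target exposes two Malloc namespaces and the initiator is connected as nqn.2016-06.io.spdk:host1; the ns_is_visible checks above decide visibility by listing namespace IDs and reading the NGUID, which the target reports as all zeros when the namespace is masked from this host. A rough standalone rendering of that check (assumes nvme-cli and jq, as the harness does; the in-tree helper in ns_masking.sh differs in detail):

    # Return 0 if controller $1 (e.g. /dev/nvme0) exposes namespace id $2
    # (e.g. 0x1) with a non-zero NGUID; masked namespaces show as all zeros.
    ns_is_visible() {
        local ctrl=$1 nsid=$2
        nvme list-ns "$ctrl" | grep -q ":${nsid}\$" || return 1
        local nguid
        nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible /dev/nvme0 0x1 && echo "nsid 1 visible"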
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:27.471 [ 1]:0x2
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f72803eeb1400a829fcb7eb910ddb9
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f72803eeb1400a829fcb7eb910ddb9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:27.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:27.471 11:48:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:27.732 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:14:27.732 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:14:27.732 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8ffa2998-fe6a-4952-957f-c7af3340b7bf -a 10.0.0.2 -s 4420 -i 4
00:14:27.993 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:14:27.993 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:14:27.993 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:14:27.993 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]]
00:14:27.993 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1
00:14:27.993 11:48:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- #
return 0 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.906 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:30.166 [ 0]:0x2 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=84f72803eeb1400a829fcb7eb910ddb9 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f72803eeb1400a829fcb7eb910ddb9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.166 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:30.428 [ 0]:0x1 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d483bda402e0464492b78e1b27463baa 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d483bda402e0464492b78e1b27463baa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:30.428 [ 1]:0x2 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:30.428 11:48:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.428 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f72803eeb1400a829fcb7eb910ddb9 00:14:30.428 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f72803eeb1400a829fcb7eb910ddb9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.428 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:30.688 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:30.688 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:30.688 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:30.688 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:30.688 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.688 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:30.688 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.688 11:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:30.688 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:30.689 [ 0]:0x2 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:30.689 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.949 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f72803eeb1400a829fcb7eb910ddb9 00:14:30.949 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f72803eeb1400a829fcb7eb910ddb9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.949 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:30.949 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:30.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.949 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:30.949 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:30.949 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8ffa2998-fe6a-4952-957f-c7af3340b7bf -a 10.0.0.2 -s 4420 -i 4 00:14:31.209 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:31.209 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:31.209 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.209 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:31.209 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:31.209 11:48:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:33.117 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:33.118 [ 0]:0x1 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:33.118 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d483bda402e0464492b78e1b27463baa 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d483bda402e0464492b78e1b27463baa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:33.378 [ 1]:0x2 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f72803eeb1400a829fcb7eb910ddb9 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f72803eeb1400a829fcb7eb910ddb9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.378 11:48:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:33.638 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:33.639 [ 0]:0x2 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f72803eeb1400a829fcb7eb910ddb9 00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f72803eeb1400a829fcb7eb910ddb9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.639 11:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:14:33.639 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:33.899 [2024-10-11 11:48:18.319407] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:14:33.899 request:
00:14:33.899 {
00:14:33.899 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:33.899 "nsid": 2,
00:14:33.899 "host": "nqn.2016-06.io.spdk:host1",
00:14:33.899 "method": "nvmf_ns_remove_host",
00:14:33.899 "req_id": 1
00:14:33.899 }
00:14:33.899 Got JSON-RPC error response
00:14:33.899 response:
00:14:33.899 {
00:14:33.899 "code": -32602,
00:14:33.899 "message": "Invalid parameters"
00:14:33.899 }
00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1
00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0
00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1
00:14:33.899 11:48:18
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:33.899 [ 0]:0x2 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=84f72803eeb1400a829fcb7eb910ddb9 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 84f72803eeb1400a829fcb7eb910ddb9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:33.899 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=964201 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 964201 /var/tmp/host.sock 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 964201 ']' 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:34.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.160 11:48:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:34.160 [2024-10-11 11:48:18.703234] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:34.160 [2024-10-11 11:48:18.703286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964201 ] 00:14:34.160 [2024-10-11 11:48:18.779928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.501 [2024-10-11 11:48:18.815401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.162 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:35.162 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:35.162 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.162 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:35.445 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9e2045a0-2772-42d5-8e40-10a86faa5e3c 00:14:35.445 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:35.445 11:48:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9E2045A0277242D58E4010A86FAA5E3C -i 00:14:35.445 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid df2a9462-f06a-4ab7-bd29-ead29386cccc 00:14:35.445 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:35.445 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DF2A9462F06A4AB7BD29EAD29386CCCC -i 00:14:35.731 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:36.014 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:36.014 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:36.014 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:36.603 nvme0n1 00:14:36.603 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:36.603 11:48:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:36.603 nvme1n2 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:36.863 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:37.125 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9e2045a0-2772-42d5-8e40-10a86faa5e3c == \9\e\2\0\4\5\a\0\-\2\7\7\2\-\4\2\d\5\-\8\e\4\0\-\1\0\a\8\6\f\a\a\5\e\3\c ]] 00:14:37.125 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:37.125 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:37.125 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
df2a9462-f06a-4ab7-bd29-ead29386cccc == \d\f\2\a\9\4\6\2\-\f\0\6\a\-\4\a\b\7\-\b\d\2\9\-\e\a\d\2\9\3\8\6\c\c\c\c ]] 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 964201 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 964201 ']' 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 964201 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 964201 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 964201' 00:14:37.384 killing process with pid 964201 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 964201 00:14:37.384 11:48:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 964201 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:37.644 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:37.644 rmmod nvme_tcp 00:14:37.644 rmmod nvme_fabrics 00:14:37.905 rmmod nvme_keyring 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 961542 ']' 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 961542 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 961542 ']' 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 961542 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 961542 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 961542' 00:14:37.905 killing process with pid 961542 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 961542 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 961542 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.905 11:48:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:40.448 00:14:40.448 real 0m25.240s 00:14:40.448 user 0m25.677s 00:14:40.448 sys 0m7.893s 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.448 ************************************ 00:14:40.448 END TEST nvmf_ns_masking 00:14:40.448 ************************************ 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
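The masking flow that just completed reduces to a handful of target-side RPCs plus a host-side NGUID check. A condensed sketch (the rpc.py path is shortened for readability, the UUID is one of the two from the trace, and the upcasing half of uuid2nguid is an assumption since the xtrace only shows its `tr -d -` step):

  # Target side: re-add a namespace with an explicit NGUID, visible to one host only.
  rpc=./spdk/scripts/rpc.py                          # illustrative path
  subnqn=nqn.2016-06.io.spdk:cnode1
  uuid=9e2045a0-2772-42d5-8e40-10a86faa5e3c
  nguid=$(tr -d - <<< "${uuid^^}")                   # uuid2nguid: drop dashes (upcase assumed)
  $rpc nvmf_subsystem_remove_ns "$subnqn" 1
  $rpc nvmf_subsystem_add_ns "$subnqn" Malloc1 -n 1 -g "$nguid" -i
  $rpc nvmf_ns_add_host "$subnqn" 1 nqn.2016-06.io.spdk:host1
  # Host side: ns_is_visible() checks the listing and reads the NGUID back;
  # an all-zero NGUID from id-ns means the namespace is masked for this host.
  nvme list-ns /dev/nvme0 | grep 0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid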
00:14:40.448 ************************************ 00:14:40.448 START TEST nvmf_nvme_cli 00:14:40.448 ************************************ 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.448 * Looking for test storage... 00:14:40.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.448 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:40.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.449 --rc genhtml_branch_coverage=1 00:14:40.449 --rc genhtml_function_coverage=1 00:14:40.449 --rc genhtml_legend=1 00:14:40.449 --rc geninfo_all_blocks=1 00:14:40.449 --rc geninfo_unexecuted_blocks=1 00:14:40.449 00:14:40.449 ' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:40.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.449 --rc genhtml_branch_coverage=1 00:14:40.449 --rc genhtml_function_coverage=1 00:14:40.449 --rc genhtml_legend=1 00:14:40.449 --rc geninfo_all_blocks=1 00:14:40.449 --rc geninfo_unexecuted_blocks=1 00:14:40.449 00:14:40.449 ' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:40.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.449 --rc genhtml_branch_coverage=1 00:14:40.449 --rc genhtml_function_coverage=1 00:14:40.449 --rc genhtml_legend=1 00:14:40.449 --rc geninfo_all_blocks=1 00:14:40.449 --rc geninfo_unexecuted_blocks=1 00:14:40.449 00:14:40.449 ' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:40.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.449 --rc genhtml_branch_coverage=1 00:14:40.449 --rc genhtml_function_coverage=1 00:14:40.449 --rc genhtml_legend=1 00:14:40.449 --rc geninfo_all_blocks=1 00:14:40.449 --rc geninfo_unexecuted_blocks=1 00:14:40.449 00:14:40.449 ' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
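The `lt 1.15 2` gate being stepped through here is scripts/common.sh deciding whether the installed lcov is new enough for the branch/function coverage flags: both version strings are split on `.`, `-` and `:` and compared field by field as integers. A minimal re-implementation sketch of that shape, not the upstream code (use_old_lcov_opts is a hypothetical placeholder for selecting the pre-2.0 flag set):

  lt() {
    local IFS=.-: i
    local -a ver1=($1) ver2=($2)                        # split on . - :
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
      (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # strictly older
      (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1                                            # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && use_old_lcov_opts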
00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.449 11:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:40.449 11:48:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:48.593 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:48.593 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.593 
11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:48.593 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:48.593 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.593 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:48.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:14:48.594 00:14:48.594 --- 10.0.0.2 ping statistics --- 00:14:48.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.594 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:14:48.594 00:14:48.594 --- 10.0.0.1 ping statistics --- 00:14:48.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.594 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=969230 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 969230 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 969230 ']' 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.594 11:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.594 [2024-10-11 11:48:32.444595] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
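For reference, the interface plumbing traced just above reduces to the following sequence (device names, addresses and the firewall rule are the ones traced, minus the address flushes; all commands need root):

  ip netns add cvl_0_0_ns_spdk                        # target gets its own net namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-facing port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back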
00:14:48.594 [2024-10-11 11:48:32.444661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.594 [2024-10-11 11:48:32.532258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.594 [2024-10-11 11:48:32.586377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.594 [2024-10-11 11:48:32.586430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.594 [2024-10-11 11:48:32.586439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.594 [2024-10-11 11:48:32.586446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.594 [2024-10-11 11:48:32.586452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.594 [2024-10-11 11:48:32.588509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.594 [2024-10-11 11:48:32.588722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.594 [2024-10-11 11:48:32.588836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.594 [2024-10-11 11:48:32.588934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 [2024-10-11 11:48:33.329295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 Malloc0 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
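The target-side provisioning that the next stretch of the trace performs over JSON-RPC condenses to the calls below; rpc_cmd is the harness wrapper that points rpc.py at the in-namespace target, and the argument values are the ones that appear in the trace:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # transport flags exactly as traced
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # two 64 MB, 512 B-block ram disks
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
          -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420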
00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 Malloc1 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 [2024-10-11 11:48:33.444351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.856 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:49.117 00:14:49.117 Discovery Log Number of Records 2, Generation counter 2 00:14:49.117 =====Discovery Log Entry 0====== 00:14:49.117 trtype: tcp 00:14:49.117 adrfam: ipv4 00:14:49.117 subtype: current discovery subsystem 00:14:49.117 treq: not required 00:14:49.117 portid: 0 00:14:49.117 trsvcid: 4420 00:14:49.117 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:49.117 traddr: 10.0.0.2 00:14:49.117 eflags: explicit discovery connections, duplicate discovery information 00:14:49.117 sectype: none 00:14:49.117 =====Discovery Log Entry 1====== 00:14:49.117 trtype: tcp 00:14:49.117 adrfam: ipv4 00:14:49.117 subtype: nvme subsystem 00:14:49.117 treq: not required 00:14:49.117 portid: 0 00:14:49.117 trsvcid: 4420 00:14:49.117 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:49.117 traddr: 10.0.0.2 00:14:49.117 eflags: none 00:14:49.117 sectype: none 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:49.117 11:48:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.030 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:51.030 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:51.030 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.030 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:51.030 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:51.030 11:48:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:52.940 11:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:52.940 /dev/nvme0n2 ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.940 11:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.940 rmmod nvme_tcp 00:14:52.940 rmmod nvme_fabrics 00:14:52.940 rmmod nvme_keyring 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 969230 ']' 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 969230 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 969230 ']' 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 969230 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 969230 
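Host-side, the nvme_cli exercise that just wrapped up amounts to a discover/connect/verify/disconnect round trip. Condensed below with the generated hostnqn/hostid from earlier in the trace; the waitforserial polling loop is paraphrased rather than copied:

  host="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be"
  nvme discover $host -t tcp -a 10.0.0.2 -s 4420       # expects the 2 discovery log entries above
  nvme connect $host -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial (paraphrased): poll until both namespaces of the serial show up
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 2 ]; do sleep 2; done
  nvme list                                            # should list /dev/nvme0n1 and /dev/nvme0n2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1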
00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 969230' 00:14:52.940 killing process with pid 969230 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 969230 00:14:52.940 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 969230 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.201 11:48:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.112 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:55.112 00:14:55.112 real 0m15.033s 00:14:55.112 user 0m22.374s 00:14:55.112 sys 0m6.265s 00:14:55.112 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.112 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.112 ************************************ 00:14:55.112 END TEST nvmf_nvme_cli 00:14:55.112 ************************************ 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.374 ************************************ 00:14:55.374 START TEST nvmf_vfio_user 00:14:55.374 ************************************ 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:55.374 * Looking for test storage... 00:14:55.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:55.374 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:55.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.375 --rc genhtml_branch_coverage=1 00:14:55.375 --rc genhtml_function_coverage=1 00:14:55.375 --rc genhtml_legend=1 00:14:55.375 --rc geninfo_all_blocks=1 00:14:55.375 --rc geninfo_unexecuted_blocks=1 00:14:55.375 00:14:55.375 ' 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:55.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.375 --rc genhtml_branch_coverage=1 00:14:55.375 --rc genhtml_function_coverage=1 00:14:55.375 --rc genhtml_legend=1 00:14:55.375 --rc geninfo_all_blocks=1 00:14:55.375 --rc geninfo_unexecuted_blocks=1 00:14:55.375 00:14:55.375 ' 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:55.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.375 --rc genhtml_branch_coverage=1 00:14:55.375 --rc genhtml_function_coverage=1 00:14:55.375 --rc genhtml_legend=1 00:14:55.375 --rc geninfo_all_blocks=1 00:14:55.375 --rc geninfo_unexecuted_blocks=1 00:14:55.375 00:14:55.375 ' 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:55.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.375 --rc genhtml_branch_coverage=1 00:14:55.375 --rc genhtml_function_coverage=1 00:14:55.375 --rc genhtml_legend=1 00:14:55.375 --rc geninfo_all_blocks=1 00:14:55.375 --rc geninfo_unexecuted_blocks=1 00:14:55.375 00:14:55.375 ' 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.375 11:48:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:55.375 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.375 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.375 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.375 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.375 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.375 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.375 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
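The long scripts/common.sh walk further up (@333-368) is `lt 1.15 2`: the harness checking whether the installed lcov predates version 2 before settling on the old-style `--rc lcov_branch_coverage=1` flags. It splits both version strings on '.', '-' and ':' and compares them field by field; a condensed sketch, with the digit-sanitizing decimal() helper from the trace omitted:

    # Rough shape of scripts/common.sh cmp_versions/lt as traced above.
    cmp_versions() {
        local ver1 ver2 op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]    # all fields equal: only ==, <= and >= hold
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> 0 (true) here

Separately, the `[: : integer expression expected` message just above is nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: the variable being tested is empty in this environment and `-eq` needs integers on both sides. It is harmless for the run; the usual hardening is to default the operand, e.g. `[ "${VAR:-0}" -eq 1 ]`, with VAR standing in for whichever setting line 33 actually reads.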
00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:55.637 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=970816 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 970816' 00:14:55.638 Process pid: 970816 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 970816 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 970816 ']' 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.638 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:55.638 [2024-10-11 11:48:40.096007] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:55.638 [2024-10-11 11:48:40.096082] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.638 [2024-10-11 11:48:40.174461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.638 [2024-10-11 11:48:40.215467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.638 [2024-10-11 11:48:40.215507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
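At this point setup_nvmf_vfio_user has launched the target (`nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'`: shared-memory id 0, all tracepoint groups enabled, cores 0-3) and is blocked in waitforlisten until the RPC socket answers; the EAL and app notices around here are the target coming up. The wait is essentially a poll loop. A sketch, where rpc.py stands for the full scripts/rpc.py path and the poll budget is an assumption:

    nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # waitforlisten: probe the socket with a cheap RPC until it responds.
    for ((i = 0; i < 100; i++)); do    # 100 x 0.1 s budget assumed
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done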
00:14:55.638 [2024-10-11 11:48:40.215513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.638 [2024-10-11 11:48:40.215519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.638 [2024-10-11 11:48:40.215524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.638 [2024-10-11 11:48:40.217273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.638 [2024-10-11 11:48:40.217433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.638 [2024-10-11 11:48:40.217591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.638 [2024-10-11 11:48:40.217593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.577 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.577 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:56.577 11:48:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:57.519 11:48:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:57.519 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:57.519 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:57.519 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:57.519 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:57.519 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:57.778 Malloc1 00:14:57.778 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:58.038 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:58.038 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:58.297 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.297 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:58.297 11:48:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:58.557 Malloc2 00:14:58.557 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
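The rpc.py sequence above provisions the vfio-user target end to end: one VFIOUSER transport, then, per device, a socket directory, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener; the second device's remaining calls (cnode2 add_ns and add_listener) follow just below. Condensed, with rpc.py again standing for the full scripts/rpc.py path:

    rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        rpc.py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done

On nvmf_create_subsystem, -a allows any host NQN to connect and -s sets the serial number; on the listener, -a is the vfio-user socket directory and -s 0 a placeholder service id, since vfio-user has no TCP port.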
00:14:58.819 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:58.819 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:59.090 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:59.090 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:59.090 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.090 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:59.090 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:59.090 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:59.090 [2024-10-11 11:48:43.611772] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:14:59.090 [2024-10-11 11:48:43.611816] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid971545 ] 00:14:59.090 [2024-10-11 11:48:43.640778] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:59.090 [2024-10-11 11:48:43.650977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.090 [2024-10-11 11:48:43.650993] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa243707000 00:14:59.090 [2024-10-11 11:48:43.651986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.090 [2024-10-11 11:48:43.652983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.090 [2024-10-11 11:48:43.653982] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.090 [2024-10-11 11:48:43.654996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.090 [2024-10-11 11:48:43.655996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.090 [2024-10-11 11:48:43.657002] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.090 [2024-10-11 11:48:43.658004] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:59.090 [2024-10-11 11:48:43.659015] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.090 [2024-10-11 11:48:43.660018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.090 [2024-10-11 11:48:43.660029] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa2436fc000 00:14:59.090 [2024-10-11 11:48:43.660945] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.090 [2024-10-11 11:48:43.673390] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:59.090 [2024-10-11 11:48:43.673412] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:59.090 [2024-10-11 11:48:43.676121] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:59.090 [2024-10-11 11:48:43.676151] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:59.090 [2024-10-11 11:48:43.676215] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:59.090 [2024-10-11 11:48:43.676227] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:59.090 [2024-10-11 11:48:43.676231] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:59.090 [2024-10-11 11:48:43.677122] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:59.090 [2024-10-11 11:48:43.677129] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:59.090 [2024-10-11 11:48:43.677134] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:59.090 [2024-10-11 11:48:43.678127] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:59.090 [2024-10-11 11:48:43.678133] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:59.090 [2024-10-11 11:48:43.678139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:59.090 [2024-10-11 11:48:43.679133] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:59.090 [2024-10-11 11:48:43.679139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:59.090 [2024-10-11 11:48:43.680136] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:59.090 [2024-10-11 
11:48:43.680141] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:59.090 [2024-10-11 11:48:43.680145] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:59.090 [2024-10-11 11:48:43.680149] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:59.090 [2024-10-11 11:48:43.680254] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:59.090 [2024-10-11 11:48:43.680257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:59.090 [2024-10-11 11:48:43.680263] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:59.090 [2024-10-11 11:48:43.681132] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:59.090 [2024-10-11 11:48:43.682142] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:59.090 [2024-10-11 11:48:43.683142] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:59.090 [2024-10-11 11:48:43.684151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.090 [2024-10-11 11:48:43.684207] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:59.090 [2024-10-11 11:48:43.685164] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:59.090 [2024-10-11 11:48:43.685170] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:59.090 [2024-10-11 11:48:43.685173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:59.090 [2024-10-11 11:48:43.685188] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:59.090 [2024-10-11 11:48:43.685194] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:59.090 [2024-10-11 11:48:43.685206] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.090 [2024-10-11 11:48:43.685209] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.090 [2024-10-11 11:48:43.685212] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.090 [2024-10-11 11:48:43.685222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.090 [2024-10-11 11:48:43.685256] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:59.090 [2024-10-11 11:48:43.685263] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:59.090 [2024-10-11 11:48:43.685266] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:59.090 [2024-10-11 11:48:43.685269] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:59.090 [2024-10-11 11:48:43.685273] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:59.090 [2024-10-11 11:48:43.685276] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:59.090 [2024-10-11 11:48:43.685280] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:59.090 [2024-10-11 11:48:43.685283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:59.090 [2024-10-11 11:48:43.685288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:59.090 [2024-10-11 11:48:43.685296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:59.090 [2024-10-11 11:48:43.685307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:59.090 [2024-10-11 11:48:43.685315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.091 [2024-10-11 11:48:43.685321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.091 [2024-10-11 11:48:43.685327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.091 [2024-10-11 11:48:43.685334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.091 [2024-10-11 11:48:43.685337] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685343] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685362] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:59.091 [2024-10-11 11:48:43.685365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685375] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685433] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685438] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685444] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:59.091 [2024-10-11 11:48:43.685447] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:59.091 [2024-10-11 11:48:43.685450] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.091 [2024-10-11 11:48:43.685454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685470] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:59.091 [2024-10-11 11:48:43.685479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685484] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685491] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.091 [2024-10-11 11:48:43.685495] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.091 [2024-10-11 11:48:43.685497] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.091 [2024-10-11 11:48:43.685501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685531] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685536] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685541] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.091 [2024-10-11 11:48:43.685544] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.091 [2024-10-11 11:48:43.685547] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.091 [2024-10-11 11:48:43.685551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685568] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685573] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685579] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685587] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685594] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:59.091 [2024-10-11 11:48:43.685597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:59.091 [2024-10-11 11:48:43.685601] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:59.091 [2024-10-11 11:48:43.685615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685690] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:59.091 [2024-10-11 11:48:43.685693] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:59.091 [2024-10-11 11:48:43.685695] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:59.091 [2024-10-11 11:48:43.685698] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:59.091 [2024-10-11 11:48:43.685700] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:59.091 [2024-10-11 11:48:43.685705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:59.091 [2024-10-11 11:48:43.685710] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:59.091 [2024-10-11 11:48:43.685713] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:59.091 [2024-10-11 11:48:43.685716] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.091 [2024-10-11 11:48:43.685720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685726] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:59.091 [2024-10-11 11:48:43.685729] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.091 [2024-10-11 11:48:43.685732] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.091 [2024-10-11 11:48:43.685736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685742] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:59.091 [2024-10-11 11:48:43.685746] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:59.091 [2024-10-11 11:48:43.685749] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.091 [2024-10-11 11:48:43.685753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:59.091 [2024-10-11 11:48:43.685758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:59.091 [2024-10-11 11:48:43.685780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:59.091 ===================================================== 00:14:59.091 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:59.091 ===================================================== 00:14:59.091 Controller Capabilities/Features 00:14:59.091 ================================ 00:14:59.091 Vendor ID: 4e58 00:14:59.091 Subsystem Vendor ID: 4e58 00:14:59.091 Serial Number: SPDK1 00:14:59.091 Model Number: SPDK bdev Controller 00:14:59.091 Firmware Version: 25.01 00:14:59.091 Recommended Arb Burst: 6 00:14:59.091 IEEE OUI Identifier: 8d 6b 50 00:14:59.091 Multi-path I/O 00:14:59.091 May have multiple subsystem ports: Yes 00:14:59.091 May have multiple controllers: Yes 00:14:59.091 Associated with SR-IOV VF: No 00:14:59.091 Max Data Transfer Size: 131072 00:14:59.091 Max Number of Namespaces: 32 00:14:59.091 Max Number of I/O Queues: 127 00:14:59.091 NVMe Specification Version (VS): 1.3 00:14:59.091 NVMe Specification Version (Identify): 1.3 00:14:59.091 Maximum Queue Entries: 256 00:14:59.091 Contiguous Queues Required: Yes 00:14:59.091 Arbitration Mechanisms Supported 00:14:59.091 Weighted Round Robin: Not Supported 00:14:59.091 Vendor Specific: Not Supported 00:14:59.091 Reset Timeout: 15000 ms 00:14:59.091 Doorbell Stride: 4 bytes 00:14:59.091 NVM Subsystem Reset: Not Supported 00:14:59.091 Command Sets Supported 00:14:59.091 NVM Command Set: Supported 00:14:59.091 Boot Partition: Not Supported 00:14:59.092 Memory Page Size Minimum: 4096 bytes 00:14:59.092 Memory Page Size Maximum: 4096 bytes 00:14:59.092 Persistent Memory Region: Not Supported 00:14:59.092 Optional Asynchronous Events Supported 00:14:59.092 Namespace Attribute Notices: Supported 00:14:59.092 Firmware Activation Notices: Not Supported 00:14:59.092 ANA Change Notices: Not Supported 00:14:59.092 PLE Aggregate Log Change Notices: Not Supported 00:14:59.092 LBA Status Info Alert Notices: Not Supported 00:14:59.092 EGE Aggregate Log Change Notices: Not Supported 00:14:59.092 Normal NVM Subsystem Shutdown event: Not Supported 00:14:59.092 Zone Descriptor Change Notices: Not Supported 00:14:59.092 Discovery Log Change Notices: Not Supported 00:14:59.092 Controller Attributes 00:14:59.092 128-bit Host Identifier: Supported 00:14:59.092 Non-Operational Permissive Mode: Not Supported 00:14:59.092 NVM Sets: Not Supported 00:14:59.092 Read Recovery Levels: Not Supported 00:14:59.092 Endurance Groups: Not Supported 00:14:59.092 Predictable Latency Mode: Not Supported 00:14:59.092 Traffic Based Keep ALive: Not Supported 00:14:59.092 Namespace Granularity: Not Supported 00:14:59.092 SQ Associations: Not Supported 00:14:59.092 UUID List: Not Supported 00:14:59.092 Multi-Domain Subsystem: Not Supported 00:14:59.092 Fixed Capacity Management: Not Supported 00:14:59.092 Variable Capacity Management: Not Supported 00:14:59.092 Delete Endurance Group: Not Supported 00:14:59.092 Delete NVM Set: Not Supported 00:14:59.092 Extended LBA Formats Supported: Not Supported 00:14:59.092 Flexible Data Placement Supported: Not Supported 00:14:59.092 00:14:59.092 Controller Memory Buffer Support 00:14:59.092 ================================ 00:14:59.092 Supported: No 00:14:59.092 00:14:59.092 Persistent Memory Region Support 00:14:59.092 
================================ 00:14:59.092 Supported: No 00:14:59.092 00:14:59.092 Admin Command Set Attributes 00:14:59.092 ============================ 00:14:59.092 Security Send/Receive: Not Supported 00:14:59.092 Format NVM: Not Supported 00:14:59.092 Firmware Activate/Download: Not Supported 00:14:59.092 Namespace Management: Not Supported 00:14:59.092 Device Self-Test: Not Supported 00:14:59.092 Directives: Not Supported 00:14:59.092 NVMe-MI: Not Supported 00:14:59.092 Virtualization Management: Not Supported 00:14:59.092 Doorbell Buffer Config: Not Supported 00:14:59.092 Get LBA Status Capability: Not Supported 00:14:59.092 Command & Feature Lockdown Capability: Not Supported 00:14:59.092 Abort Command Limit: 4 00:14:59.092 Async Event Request Limit: 4 00:14:59.092 Number of Firmware Slots: N/A 00:14:59.092 Firmware Slot 1 Read-Only: N/A 00:14:59.092 Firmware Activation Without Reset: N/A 00:14:59.092 Multiple Update Detection Support: N/A 00:14:59.092 Firmware Update Granularity: No Information Provided 00:14:59.092 Per-Namespace SMART Log: No 00:14:59.092 Asymmetric Namespace Access Log Page: Not Supported 00:14:59.092 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:59.092 Command Effects Log Page: Supported 00:14:59.092 Get Log Page Extended Data: Supported 00:14:59.092 Telemetry Log Pages: Not Supported 00:14:59.092 Persistent Event Log Pages: Not Supported 00:14:59.092 Supported Log Pages Log Page: May Support 00:14:59.092 Commands Supported & Effects Log Page: Not Supported 00:14:59.092 Feature Identifiers & Effects Log Page:May Support 00:14:59.092 NVMe-MI Commands & Effects Log Page: May Support 00:14:59.092 Data Area 4 for Telemetry Log: Not Supported 00:14:59.092 Error Log Page Entries Supported: 128 00:14:59.092 Keep Alive: Supported 00:14:59.092 Keep Alive Granularity: 10000 ms 00:14:59.092 00:14:59.092 NVM Command Set Attributes 00:14:59.092 ========================== 00:14:59.092 Submission Queue Entry Size 00:14:59.092 Max: 64 00:14:59.092 Min: 64 00:14:59.092 Completion Queue Entry Size 00:14:59.092 Max: 16 00:14:59.092 Min: 16 00:14:59.092 Number of Namespaces: 32 00:14:59.092 Compare Command: Supported 00:14:59.092 Write Uncorrectable Command: Not Supported 00:14:59.092 Dataset Management Command: Supported 00:14:59.092 Write Zeroes Command: Supported 00:14:59.092 Set Features Save Field: Not Supported 00:14:59.092 Reservations: Not Supported 00:14:59.092 Timestamp: Not Supported 00:14:59.092 Copy: Supported 00:14:59.092 Volatile Write Cache: Present 00:14:59.092 Atomic Write Unit (Normal): 1 00:14:59.092 Atomic Write Unit (PFail): 1 00:14:59.092 Atomic Compare & Write Unit: 1 00:14:59.092 Fused Compare & Write: Supported 00:14:59.092 Scatter-Gather List 00:14:59.092 SGL Command Set: Supported (Dword aligned) 00:14:59.092 SGL Keyed: Not Supported 00:14:59.092 SGL Bit Bucket Descriptor: Not Supported 00:14:59.092 SGL Metadata Pointer: Not Supported 00:14:59.092 Oversized SGL: Not Supported 00:14:59.092 SGL Metadata Address: Not Supported 00:14:59.092 SGL Offset: Not Supported 00:14:59.092 Transport SGL Data Block: Not Supported 00:14:59.092 Replay Protected Memory Block: Not Supported 00:14:59.092 00:14:59.092 Firmware Slot Information 00:14:59.092 ========================= 00:14:59.092 Active slot: 1 00:14:59.092 Slot 1 Firmware Revision: 25.01 00:14:59.092 00:14:59.092 00:14:59.092 Commands Supported and Effects 00:14:59.092 ============================== 00:14:59.092 Admin Commands 00:14:59.092 -------------- 00:14:59.092 Get Log Page (02h): Supported 
00:14:59.092 Identify (06h): Supported
00:14:59.092 Abort (08h): Supported
00:14:59.092 Set Features (09h): Supported
00:14:59.092 Get Features (0Ah): Supported
00:14:59.092 Asynchronous Event Request (0Ch): Supported
00:14:59.092 Keep Alive (18h): Supported
00:14:59.092 I/O Commands
00:14:59.092 ------------
00:14:59.092 Flush (00h): Supported LBA-Change
00:14:59.092 Write (01h): Supported LBA-Change
00:14:59.092 Read (02h): Supported
00:14:59.092 Compare (05h): Supported
00:14:59.092 Write Zeroes (08h): Supported LBA-Change
00:14:59.092 Dataset Management (09h): Supported LBA-Change
00:14:59.092 Copy (19h): Supported LBA-Change
00:14:59.092
00:14:59.092 Error Log
00:14:59.092 =========
00:14:59.092
00:14:59.092 Arbitration
00:14:59.092 ===========
00:14:59.092 Arbitration Burst: 1
00:14:59.092
00:14:59.092 Power Management
00:14:59.092 ================
00:14:59.092 Number of Power States: 1
00:14:59.092 Current Power State: Power State #0
00:14:59.092 Power State #0:
00:14:59.092 Max Power: 0.00 W
00:14:59.092 Non-Operational State: Operational
00:14:59.092 Entry Latency: Not Reported
00:14:59.092 Exit Latency: Not Reported
00:14:59.092 Relative Read Throughput: 0
00:14:59.092 Relative Read Latency: 0
00:14:59.092 Relative Write Throughput: 0
00:14:59.092 Relative Write Latency: 0
00:14:59.092 Idle Power: Not Reported
00:14:59.092 Active Power: Not Reported
00:14:59.092 Non-Operational Permissive Mode: Not Supported
00:14:59.092
00:14:59.092 Health Information
00:14:59.092 ==================
00:14:59.092 Critical Warnings:
00:14:59.092 Available Spare Space: OK
00:14:59.092 Temperature: OK
00:14:59.092 Device Reliability: OK
00:14:59.092 Read Only: No
00:14:59.092 Volatile Memory Backup: OK
00:14:59.092 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:59.092 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:14:59.092 Available Spare: 0%
00:14:59.092 Available Spare Threshold: 0%
00:14:59.092 Life Percentage Used: 0%
00:14:59.092 Data Units Read: 0
00:14:59.092 Data Units Written: 0
00:14:59.092 Host Read Commands: 0
00:14:59.093 Host Write Commands: 0
00:14:59.093 Controller Busy Time: 0 minutes
00:14:59.093 Power Cycles: 0
00:14:59.093 Power On Hours: 0 hours
00:14:59.093 Unsafe Shutdowns: 0
00:14:59.093 Unrecoverable Media Errors: 0
00:14:59.093 Lifetime Error Log Entries: 0
00:14:59.093 Warning Temperature Time: 0 minutes
00:14:59.093 Critical Temperature Time: 0 minutes
00:14:59.093
00:14:59.093 Number of Queues
00:14:59.093 ================
00:14:59.093 Number of I/O Submission Queues: 127
00:14:59.093 Number of I/O Completion Queues: 127
00:14:59.093
00:14:59.093 Active Namespaces
00:14:59.093 =================
00:14:59.093 Namespace ID:1
00:14:59.093 Error Recovery Timeout: Unlimited
00:14:59.093 Command Set Identifier: NVM (00h)
00:14:59.093 Deallocate: Supported
00:14:59.093 Deallocated/Unwritten Error: Not Supported
00:14:59.093 Deallocated Read Value: Unknown
00:14:59.093 Deallocate in Write Zeroes: Not Supported
00:14:59.093 Deallocated Guard Field: 0xFFFF
00:14:59.093 Flush: Supported
00:14:59.093 Reservation: Supported
00:14:59.093 Namespace Sharing Capabilities: Multiple Controllers
00:14:59.093 Size (in LBAs): 131072 (0GiB)
00:14:59.093 Capacity (in LBAs): 131072 (0GiB)
00:14:59.093 Utilization (in LBAs): 131072 (0GiB)
00:14:59.093 NGUID: 8F3E22FEDEE64510BA68F29B0B3A1F27
00:14:59.093 UUID: 8f3e22fe-dee6-4510-ba68-f29b0b3a1f27
00:14:59.093 Thin Provisioning: Not Supported
00:14:59.093 Per-NS Atomic Units: Yes
00:14:59.093 Atomic Boundary Size (Normal): 0
00:14:59.093 Atomic Boundary Size (PFail): 0
00:14:59.093 Atomic Boundary Offset: 0
00:14:59.093 Maximum Single Source Range Length: 65535
00:14:59.093 Maximum Copy Length: 65535
00:14:59.093 Maximum Source Range Count: 1
00:14:59.093 NGUID/EUI64 Never Reused: No
00:14:59.093 Namespace Write Protected: No
00:14:59.093 Number of LBA Formats: 1
00:14:59.093 Current LBA Format: LBA Format #00
00:14:59.093 LBA Format #00: Data Size: 512 Metadata Size: 0
00:14:59.093
00:14:59.092 [2024-10-11 11:48:43.685857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:14:59.092 [2024-10-11 11:48:43.685869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:14:59.092 [2024-10-11 11:48:43.685890] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD
00:14:59.092 [2024-10-11 11:48:43.685897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.092 [2024-10-11 11:48:43.685903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.092 [2024-10-11 11:48:43.685907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.092 [2024-10-11 11:48:43.685912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.092 [2024-10-11 11:48:43.686171] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:14:59.092 [2024-10-11 11:48:43.686180] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:14:59.092 [2024-10-11 11:48:43.687166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:59.092 [2024-10-11 11:48:43.687206] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us
00:14:59.092 [2024-10-11 11:48:43.687211] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms
00:14:59.092 [2024-10-11 11:48:43.688177] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:14:59.092 [2024-10-11 11:48:43.688186] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds
00:14:59.092 [2024-10-11 11:48:43.688239] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:14:59.092 [2024-10-11 11:48:43.691674] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:59.354 11:48:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:14:59.354 [2024-10-11 11:48:43.869328] vfio_user.c:2836:enable_ctrlr: *NOTICE*:
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.695 Initializing NVMe Controllers 00:15:04.695 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.695 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:04.695 Initialization complete. Launching workers. 00:15:04.695 ======================================================== 00:15:04.695 Latency(us) 00:15:04.695 Device Information : IOPS MiB/s Average min max 00:15:04.695 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40007.01 156.28 3199.12 855.06 8757.50 00:15:04.695 ======================================================== 00:15:04.695 Total : 40007.01 156.28 3199.12 855.06 8757.50 00:15:04.695 00:15:04.695 [2024-10-11 11:48:48.889017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.695 11:48:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:04.695 [2024-10-11 11:48:49.060821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.978 Initializing NVMe Controllers 00:15:09.978 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.978 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:09.978 Initialization complete. Launching workers. 00:15:09.979 ======================================================== 00:15:09.979 Latency(us) 00:15:09.979 Device Information : IOPS MiB/s Average min max 00:15:09.979 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7972.76 4986.82 10974.62 00:15:09.979 ======================================================== 00:15:09.979 Total : 16076.80 62.80 7972.76 4986.82 10974.62 00:15:09.979 00:15:09.979 [2024-10-11 11:48:54.099905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.979 11:48:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:09.979 [2024-10-11 11:48:54.293772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:15.261 [2024-10-11 11:48:59.393028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:15.261 Initializing NVMe Controllers 00:15:15.261 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:15.261 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:15.261 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:15.261 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:15.261 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:15.261 Initialization complete. Launching workers. 
00:15:15.261 Starting thread on core 2 00:15:15.261 Starting thread on core 3 00:15:15.261 Starting thread on core 1 00:15:15.261 11:48:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:15.261 [2024-10-11 11:48:59.631064] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.556 [2024-10-11 11:49:02.686839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.556 Initializing NVMe Controllers 00:15:18.556 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.556 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.556 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:18.556 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:18.556 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:18.556 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:18.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:18.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:18.556 Initialization complete. Launching workers. 00:15:18.556 Starting thread on core 1 with urgent priority queue 00:15:18.556 Starting thread on core 2 with urgent priority queue 00:15:18.556 Starting thread on core 3 with urgent priority queue 00:15:18.556 Starting thread on core 0 with urgent priority queue 00:15:18.556 SPDK bdev Controller (SPDK1 ) core 0: 9654.67 IO/s 10.36 secs/100000 ios 00:15:18.556 SPDK bdev Controller (SPDK1 ) core 1: 11327.67 IO/s 8.83 secs/100000 ios 00:15:18.556 SPDK bdev Controller (SPDK1 ) core 2: 11606.67 IO/s 8.62 secs/100000 ios 00:15:18.556 SPDK bdev Controller (SPDK1 ) core 3: 13631.00 IO/s 7.34 secs/100000 ios 00:15:18.556 ======================================================== 00:15:18.556 00:15:18.556 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:18.556 [2024-10-11 11:49:02.907627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.556 Initializing NVMe Controllers 00:15:18.556 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.556 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:18.556 Namespace ID: 1 size: 0GB 00:15:18.556 Initialization complete. 00:15:18.556 INFO: using host memory buffer for IO 00:15:18.556 Hello world! 
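The hello_world run above, like the perf, reconnect and arbitration runs before it, reaches the target through the same vfio-user transport ID string passed via -r. A minimal sketch of the invocation pattern, assuming a built SPDK tree at the workspace path used by this job and a target already listening on the vfio-user socket (those two preconditions are assumptions; the flags and transport ID are taken verbatim from the runs above):

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# 4 KiB reads at queue depth 128 for 5 seconds on core mask 0x2:
"$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# The example apps accept the same -r transport ID string:
"$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$TRID"

Switching -w read to -w write (as in the @85 run above) is the only change between the read and write latency tables printed earlier.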
00:15:18.556 [2024-10-11 11:49:02.939829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.556 11:49:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:18.556 [2024-10-11 11:49:03.159228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.936 Initializing NVMe Controllers 00:15:19.937 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.937 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.937 Initialization complete. Launching workers. 00:15:19.937 submit (in ns) avg, min, max = 6772.0, 2863.3, 4002275.0 00:15:19.937 complete (in ns) avg, min, max = 16140.5, 1634.2, 4001263.3 00:15:19.937 00:15:19.937 Submit histogram 00:15:19.937 ================ 00:15:19.937 Range in us Cumulative Count 00:15:19.937 2.853 - 2.867: 0.0099% ( 2) 00:15:19.937 2.867 - 2.880: 0.1583% ( 30) 00:15:19.937 2.880 - 2.893: 1.2416% ( 219) 00:15:19.937 2.893 - 2.907: 3.8831% ( 534) 00:15:19.937 2.907 - 2.920: 8.3201% ( 897) 00:15:19.937 2.920 - 2.933: 13.6179% ( 1071) 00:15:19.937 2.933 - 2.947: 19.5489% ( 1199) 00:15:19.937 2.947 - 2.960: 25.8657% ( 1277) 00:15:19.937 2.960 - 2.973: 31.2574% ( 1090) 00:15:19.937 2.973 - 2.987: 37.4110% ( 1244) 00:15:19.937 2.987 - 3.000: 42.8275% ( 1095) 00:15:19.937 3.000 - 3.013: 48.2093% ( 1088) 00:15:19.937 3.013 - 3.027: 54.5360% ( 1279) 00:15:19.937 3.027 - 3.040: 62.2329% ( 1556) 00:15:19.937 3.040 - 3.053: 71.8045% ( 1935) 00:15:19.937 3.053 - 3.067: 80.4017% ( 1738) 00:15:19.937 3.067 - 3.080: 87.0499% ( 1344) 00:15:19.937 3.080 - 3.093: 92.6791% ( 1138) 00:15:19.937 3.093 - 3.107: 96.2851% ( 729) 00:15:19.937 3.107 - 3.120: 98.3726% ( 422) 00:15:19.937 3.120 - 3.133: 99.2432% ( 176) 00:15:19.937 3.133 - 3.147: 99.4905% ( 50) 00:15:19.937 3.147 - 3.160: 99.5598% ( 14) 00:15:19.937 3.160 - 3.173: 99.5746% ( 3) 00:15:19.937 3.173 - 3.187: 99.5795% ( 1) 00:15:19.937 3.227 - 3.240: 99.5845% ( 1) 00:15:19.937 3.240 - 3.253: 99.5894% ( 1) 00:15:19.937 3.253 - 3.267: 99.5944% ( 1) 00:15:19.937 3.307 - 3.320: 99.5993% ( 1) 00:15:19.937 3.347 - 3.360: 99.6043% ( 1) 00:15:19.937 3.600 - 3.627: 99.6092% ( 1) 00:15:19.937 3.787 - 3.813: 99.6142% ( 1) 00:15:19.937 3.893 - 3.920: 99.6191% ( 1) 00:15:19.937 4.000 - 4.027: 99.6241% ( 1) 00:15:19.937 4.133 - 4.160: 99.6290% ( 1) 00:15:19.937 4.213 - 4.240: 99.6389% ( 2) 00:15:19.937 4.507 - 4.533: 99.6438% ( 1) 00:15:19.937 4.907 - 4.933: 99.6488% ( 1) 00:15:19.937 4.933 - 4.960: 99.6537% ( 1) 00:15:19.937 4.960 - 4.987: 99.6636% ( 2) 00:15:19.937 4.987 - 5.013: 99.6735% ( 2) 00:15:19.937 5.013 - 5.040: 99.6785% ( 1) 00:15:19.937 5.067 - 5.093: 99.6834% ( 1) 00:15:19.937 5.093 - 5.120: 99.6884% ( 1) 00:15:19.937 5.120 - 5.147: 99.6983% ( 2) 00:15:19.937 5.173 - 5.200: 99.7032% ( 1) 00:15:19.937 5.253 - 5.280: 99.7082% ( 1) 00:15:19.937 5.307 - 5.333: 99.7131% ( 1) 00:15:19.937 5.680 - 5.707: 99.7230% ( 2) 00:15:19.937 5.760 - 5.787: 99.7279% ( 1) 00:15:19.937 5.787 - 5.813: 99.7329% ( 1) 00:15:19.937 5.813 - 5.840: 99.7378% ( 1) 00:15:19.937 5.840 - 5.867: 99.7428% ( 1) 00:15:19.937 5.867 - 5.893: 99.7477% ( 1) 00:15:19.937 5.893 - 5.920: 99.7527% ( 1) 00:15:19.937 5.973 - 6.000: 99.7576% ( 1) 00:15:19.937 6.000 - 6.027: 99.7626% ( 1) 00:15:19.937 6.027 - 6.053: 
99.7725% ( 2) 00:15:19.937 6.053 - 6.080: 99.7824% ( 2) 00:15:19.937 6.080 - 6.107: 99.7873% ( 1) 00:15:19.937 6.107 - 6.133: 99.7922% ( 1) 00:15:19.937 6.133 - 6.160: 99.7972% ( 1) 00:15:19.937 6.187 - 6.213: 99.8021% ( 1) 00:15:19.937 6.240 - 6.267: 99.8120% ( 2) 00:15:19.937 6.267 - 6.293: 99.8170% ( 1) 00:15:19.937 6.293 - 6.320: 99.8219% ( 1) 00:15:19.937 6.373 - 6.400: 99.8318% ( 2) 00:15:19.937 6.427 - 6.453: 99.8368% ( 1) 00:15:19.937 6.453 - 6.480: 99.8417% ( 1) 00:15:19.937 6.667 - 6.693: 99.8467% ( 1) 00:15:19.937 6.720 - 6.747: 99.8516% ( 1) 00:15:19.937 6.747 - 6.773: 99.8615% ( 2) 00:15:19.937 6.773 - 6.800: 99.8664% ( 1) 00:15:19.937 6.827 - 6.880: 99.8714% ( 1) 00:15:19.937 6.987 - 7.040: 99.8813% ( 2) 00:15:19.937 7.147 - 7.200: 99.8862% ( 1) 00:15:19.937 7.413 - 7.467: 99.8912% ( 1) 00:15:19.937 7.467 - 7.520: 99.8961% ( 1) 00:15:19.937 7.733 - 7.787: 99.9011% ( 1) 00:15:19.937 [2024-10-11 11:49:04.179941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.937 7.893 - 7.947: 99.9060% ( 1) 00:15:19.937 3986.773 - 4014.080: 100.0000% ( 19) 00:15:19.937 00:15:19.937 Complete histogram 00:15:19.937 ================== 00:15:19.937 Range in us Cumulative Count 00:15:19.937 1.633 - 1.640: 0.2275% ( 46) 00:15:19.937 1.640 - 1.647: 0.7915% ( 114) 00:15:19.937 1.647 - 1.653: 0.8409% ( 10) 00:15:19.937 1.653 - 1.660: 0.9349% ( 19) 00:15:19.937 1.660 - 1.667: 0.9943% ( 12) 00:15:19.937 1.667 - 1.673: 0.9992% ( 1) 00:15:19.937 1.673 - 1.680: 1.0140% ( 3) 00:15:19.937 1.680 - 1.687: 2.5277% ( 306) 00:15:19.937 1.687 - 1.693: 42.5900% ( 8099) 00:15:19.937 1.693 - 1.700: 53.4032% ( 2186) 00:15:19.937 1.700 - 1.707: 61.3722% ( 1611) 00:15:19.937 1.707 - 1.720: 78.0520% ( 3372) 00:15:19.937 1.720 - 1.733: 83.1223% ( 1025) 00:15:19.937 1.733 - 1.747: 84.0127% ( 180) 00:15:19.937 1.747 - 1.760: 88.6624% ( 940) 00:15:19.937 1.760 - 1.773: 94.3807% ( 1156) 00:15:19.937 1.773 - 1.787: 97.6157% ( 654) 00:15:19.937 1.787 - 1.800: 98.9711% ( 274) 00:15:19.937 1.800 - 1.813: 99.3372% ( 74) 00:15:19.937 1.813 - 1.827: 99.3965% ( 12) 00:15:19.937 1.827 - 1.840: 99.4163% ( 4) 00:15:19.937 1.840 - 1.853: 99.4213% ( 1) 00:15:19.937 1.867 - 1.880: 99.4262% ( 1) 00:15:19.937 1.907 - 1.920: 99.4311% ( 1) 00:15:19.937 3.680 - 3.707: 99.4361% ( 1) 00:15:19.937 3.973 - 4.000: 99.4410% ( 1) 00:15:19.937 4.107 - 4.133: 99.4509% ( 2) 00:15:19.937 4.347 - 4.373: 99.4559% ( 1) 00:15:19.937 4.373 - 4.400: 99.4608% ( 1) 00:15:19.937 4.400 - 4.427: 99.4658% ( 1) 00:15:19.937 4.480 - 4.507: 99.4707% ( 1) 00:15:19.937 4.587 - 4.613: 99.4806% ( 2) 00:15:19.937 4.667 - 4.693: 99.4856% ( 1) 00:15:19.937 4.693 - 4.720: 99.4954% ( 2) 00:15:19.937 4.720 - 4.747: 99.5004% ( 1) 00:15:19.937 4.747 - 4.773: 99.5152% ( 3) 00:15:19.937 4.773 - 4.800: 99.5350% ( 4) 00:15:19.937 4.960 - 4.987: 99.5400% ( 1) 00:15:19.937 4.987 - 5.013: 99.5449% ( 1) 00:15:19.937 5.013 - 5.040: 99.5598% ( 3) 00:15:19.937 5.067 - 5.093: 99.5696% ( 2) 00:15:19.937 5.093 - 5.120: 99.5746% ( 1) 00:15:19.937 5.120 - 5.147: 99.5845% ( 2) 00:15:19.937 5.147 - 5.173: 99.5944% ( 2) 00:15:19.937 5.200 - 5.227: 99.5993% ( 1) 00:15:19.937 5.253 - 5.280: 99.6043% ( 1) 00:15:19.937 5.280 - 5.307: 99.6092% ( 1) 00:15:19.937 5.333 - 5.360: 99.6191% ( 2) 00:15:19.937 5.413 - 5.440: 99.6241% ( 1) 00:15:19.937 5.547 - 5.573: 99.6290% ( 1) 00:15:19.937 6.187 - 6.213: 99.6340% ( 1) 00:15:19.937 9.707 - 9.760: 99.6389% ( 1) 00:15:19.937 3986.773 - 4014.080: 100.0000% ( 73) 00:15:19.937 00:15:19.937 11:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:19.937 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:19.937 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:19.937 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:19.937 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.937 [ 00:15:19.937 { 00:15:19.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.937 "subtype": "Discovery", 00:15:19.937 "listen_addresses": [], 00:15:19.937 "allow_any_host": true, 00:15:19.937 "hosts": [] 00:15:19.937 }, 00:15:19.937 { 00:15:19.937 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:19.937 "subtype": "NVMe", 00:15:19.937 "listen_addresses": [ 00:15:19.937 { 00:15:19.937 "trtype": "VFIOUSER", 00:15:19.937 "adrfam": "IPv4", 00:15:19.937 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:19.937 "trsvcid": "0" 00:15:19.937 } 00:15:19.937 ], 00:15:19.937 "allow_any_host": true, 00:15:19.937 "hosts": [], 00:15:19.937 "serial_number": "SPDK1", 00:15:19.937 "model_number": "SPDK bdev Controller", 00:15:19.937 "max_namespaces": 32, 00:15:19.937 "min_cntlid": 1, 00:15:19.938 "max_cntlid": 65519, 00:15:19.938 "namespaces": [ 00:15:19.938 { 00:15:19.938 "nsid": 1, 00:15:19.938 "bdev_name": "Malloc1", 00:15:19.938 "name": "Malloc1", 00:15:19.938 "nguid": "8F3E22FEDEE64510BA68F29B0B3A1F27", 00:15:19.938 "uuid": "8f3e22fe-dee6-4510-ba68-f29b0b3a1f27" 00:15:19.938 } 00:15:19.938 ] 00:15:19.938 }, 00:15:19.938 { 00:15:19.938 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:19.938 "subtype": "NVMe", 00:15:19.938 "listen_addresses": [ 00:15:19.938 { 00:15:19.938 "trtype": "VFIOUSER", 00:15:19.938 "adrfam": "IPv4", 00:15:19.938 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:19.938 "trsvcid": "0" 00:15:19.938 } 00:15:19.938 ], 00:15:19.938 "allow_any_host": true, 00:15:19.938 "hosts": [], 00:15:19.938 "serial_number": "SPDK2", 00:15:19.938 "model_number": "SPDK bdev Controller", 00:15:19.938 "max_namespaces": 32, 00:15:19.938 "min_cntlid": 1, 00:15:19.938 "max_cntlid": 65519, 00:15:19.938 "namespaces": [ 00:15:19.938 { 00:15:19.938 "nsid": 1, 00:15:19.938 "bdev_name": "Malloc2", 00:15:19.938 "name": "Malloc2", 00:15:19.938 "nguid": "609ADFDB8B21431B86C14078DD97D2DC", 00:15:19.938 "uuid": "609adfdb-8b21-431b-86c1-4078dd97d2dc" 00:15:19.938 } 00:15:19.938 ] 00:15:19.938 } 00:15:19.938 ] 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=975615 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:19.938 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:19.938 [2024-10-11 11:49:04.548211] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:20.198 Malloc3 00:15:20.198 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:20.198 [2024-10-11 11:49:04.766660] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:20.198 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:20.198 Asynchronous Event Request test 00:15:20.198 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.198 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.198 Registering asynchronous event callbacks... 00:15:20.198 Starting namespace attribute notice tests for all controllers... 00:15:20.198 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:20.198 aer_cb - Changed Namespace 00:15:20.198 Cleaning up... 
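The JSON that follows is the second nvmf_get_subsystems dump, taken after the AER test attached a new namespace: cnode1 now carries Malloc3 as nsid 2 alongside Malloc1. A sketch of the RPC sequence the aer_vfio_user helper drives (subcommands and arguments exactly as they appear in this log; $SPDK_DIR as in the earlier sketch):

RPC="$SPDK_DIR/scripts/rpc.py"
# Create a 64 MiB malloc bdev with 512-byte blocks:
"$RPC" bdev_malloc_create 64 512 --name Malloc3
# Attach it to cnode1 as nsid 2; this is what fires the namespace-attribute
# AEN the aer tool is waiting for ("aer_cb - Changed Namespace" above):
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# Dump the resulting subsystem state, as in the JSON below:
"$RPC" nvmf_get_subsystems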
00:15:20.459 [ 00:15:20.459 { 00:15:20.459 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:20.459 "subtype": "Discovery", 00:15:20.459 "listen_addresses": [], 00:15:20.459 "allow_any_host": true, 00:15:20.459 "hosts": [] 00:15:20.459 }, 00:15:20.459 { 00:15:20.459 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:20.459 "subtype": "NVMe", 00:15:20.459 "listen_addresses": [ 00:15:20.459 { 00:15:20.459 "trtype": "VFIOUSER", 00:15:20.459 "adrfam": "IPv4", 00:15:20.459 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:20.459 "trsvcid": "0" 00:15:20.459 } 00:15:20.459 ], 00:15:20.459 "allow_any_host": true, 00:15:20.459 "hosts": [], 00:15:20.459 "serial_number": "SPDK1", 00:15:20.459 "model_number": "SPDK bdev Controller", 00:15:20.459 "max_namespaces": 32, 00:15:20.459 "min_cntlid": 1, 00:15:20.459 "max_cntlid": 65519, 00:15:20.459 "namespaces": [ 00:15:20.459 { 00:15:20.459 "nsid": 1, 00:15:20.459 "bdev_name": "Malloc1", 00:15:20.459 "name": "Malloc1", 00:15:20.459 "nguid": "8F3E22FEDEE64510BA68F29B0B3A1F27", 00:15:20.459 "uuid": "8f3e22fe-dee6-4510-ba68-f29b0b3a1f27" 00:15:20.459 }, 00:15:20.459 { 00:15:20.459 "nsid": 2, 00:15:20.459 "bdev_name": "Malloc3", 00:15:20.459 "name": "Malloc3", 00:15:20.459 "nguid": "4295A04E08DF484C9D67F655D0AD0B2A", 00:15:20.459 "uuid": "4295a04e-08df-484c-9d67-f655d0ad0b2a" 00:15:20.459 } 00:15:20.459 ] 00:15:20.459 }, 00:15:20.459 { 00:15:20.459 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:20.459 "subtype": "NVMe", 00:15:20.459 "listen_addresses": [ 00:15:20.459 { 00:15:20.459 "trtype": "VFIOUSER", 00:15:20.459 "adrfam": "IPv4", 00:15:20.459 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:20.459 "trsvcid": "0" 00:15:20.459 } 00:15:20.459 ], 00:15:20.459 "allow_any_host": true, 00:15:20.459 "hosts": [], 00:15:20.459 "serial_number": "SPDK2", 00:15:20.459 "model_number": "SPDK bdev Controller", 00:15:20.459 "max_namespaces": 32, 00:15:20.459 "min_cntlid": 1, 00:15:20.459 "max_cntlid": 65519, 00:15:20.459 "namespaces": [ 00:15:20.459 { 00:15:20.459 "nsid": 1, 00:15:20.459 "bdev_name": "Malloc2", 00:15:20.459 "name": "Malloc2", 00:15:20.459 "nguid": "609ADFDB8B21431B86C14078DD97D2DC", 00:15:20.459 "uuid": "609adfdb-8b21-431b-86c1-4078dd97d2dc" 00:15:20.459 } 00:15:20.459 ] 00:15:20.459 } 00:15:20.459 ] 00:15:20.459 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 975615 00:15:20.459 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.459 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:20.459 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:20.459 11:49:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:20.460 [2024-10-11 11:49:04.993788] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:15:20.460 [2024-10-11 11:49:04.993831] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975767 ] 00:15:20.460 [2024-10-11 11:49:05.021691] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:20.460 [2024-10-11 11:49:05.025567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:20.460 [2024-10-11 11:49:05.025585] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9b5f656000 00:15:20.460 [2024-10-11 11:49:05.026574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.460 [2024-10-11 11:49:05.027582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.460 [2024-10-11 11:49:05.028590] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.460 [2024-10-11 11:49:05.029595] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.460 [2024-10-11 11:49:05.030603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.460 [2024-10-11 11:49:05.031610] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.460 [2024-10-11 11:49:05.032617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.460 [2024-10-11 11:49:05.033622] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.460 [2024-10-11 11:49:05.034628] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:20.460 [2024-10-11 11:49:05.034639] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9b5f64b000 00:15:20.460 [2024-10-11 11:49:05.035553] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:20.460 [2024-10-11 11:49:05.046934] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:20.460 [2024-10-11 11:49:05.046953] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:20.460 [2024-10-11 11:49:05.052017] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:20.460 [2024-10-11 11:49:05.052051] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:20.460 [2024-10-11 11:49:05.052109] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:20.460 [2024-10-11 
11:49:05.052121] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:20.460 [2024-10-11 11:49:05.052127] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:20.460 [2024-10-11 11:49:05.053023] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:20.460 [2024-10-11 11:49:05.053030] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:20.460 [2024-10-11 11:49:05.053035] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:20.460 [2024-10-11 11:49:05.054029] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:20.460 [2024-10-11 11:49:05.054036] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:20.460 [2024-10-11 11:49:05.054041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:20.460 [2024-10-11 11:49:05.055034] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:20.460 [2024-10-11 11:49:05.055041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:20.460 [2024-10-11 11:49:05.056041] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:20.460 [2024-10-11 11:49:05.056048] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:20.460 [2024-10-11 11:49:05.056051] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:20.460 [2024-10-11 11:49:05.056056] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:20.460 [2024-10-11 11:49:05.056160] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:20.460 [2024-10-11 11:49:05.056163] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:20.460 [2024-10-11 11:49:05.056167] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:20.460 [2024-10-11 11:49:05.057050] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:20.460 [2024-10-11 11:49:05.058053] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:20.460 [2024-10-11 11:49:05.059057] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:15:20.460 [2024-10-11 11:49:05.060058] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:20.460 [2024-10-11 11:49:05.060089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:20.460 [2024-10-11 11:49:05.061071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:20.460 [2024-10-11 11:49:05.061077] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:20.460 [2024-10-11 11:49:05.061081] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.061097] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:20.460 [2024-10-11 11:49:05.061102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.061112] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.460 [2024-10-11 11:49:05.061116] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.460 [2024-10-11 11:49:05.061119] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.460 [2024-10-11 11:49:05.061127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.460 [2024-10-11 11:49:05.068674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:20.460 [2024-10-11 11:49:05.068690] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:20.460 [2024-10-11 11:49:05.068693] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:20.460 [2024-10-11 11:49:05.068696] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:20.460 [2024-10-11 11:49:05.068699] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:20.460 [2024-10-11 11:49:05.068703] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:20.460 [2024-10-11 11:49:05.068706] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:20.460 [2024-10-11 11:49:05.068709] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.068715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.068722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:20.460 [2024-10-11 11:49:05.076674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:20.460 [2024-10-11 11:49:05.076683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.460 [2024-10-11 11:49:05.076690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.460 [2024-10-11 11:49:05.076696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.460 [2024-10-11 11:49:05.076702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.460 [2024-10-11 11:49:05.076705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.076712] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.076718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:20.460 [2024-10-11 11:49:05.084672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:20.460 [2024-10-11 11:49:05.084677] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:20.460 [2024-10-11 11:49:05.084683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.084688] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.084693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:20.460 [2024-10-11 11:49:05.084699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.092672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.092718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.092724] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.092729] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:20.723 [2024-10-11 11:49:05.092732] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:20.723 [2024-10-11 11:49:05.092735] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:15:20.723 [2024-10-11 11:49:05.092740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.100673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.100681] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:20.723 [2024-10-11 11:49:05.100689] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.100695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.100700] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.723 [2024-10-11 11:49:05.100703] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.723 [2024-10-11 11:49:05.100705] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.723 [2024-10-11 11:49:05.100709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.108673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.108683] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.108688] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.108693] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.723 [2024-10-11 11:49:05.108696] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.723 [2024-10-11 11:49:05.108698] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.723 [2024-10-11 11:49:05.108703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.116673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.116680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.116685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.116691] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.116695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.116699] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.116702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.116706] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:20.723 [2024-10-11 11:49:05.116709] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:20.723 [2024-10-11 11:49:05.116713] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:20.723 [2024-10-11 11:49:05.116725] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.124672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.124682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.132672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.132682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.137690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.137700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.148673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.148685] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:20.723 [2024-10-11 11:49:05.148689] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:20.723 [2024-10-11 11:49:05.148691] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:20.723 [2024-10-11 11:49:05.148694] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:20.723 [2024-10-11 11:49:05.148696] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:20.723 [2024-10-11 11:49:05.148700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:20.723 [2024-10-11 11:49:05.148706] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:20.723 [2024-10-11 11:49:05.148709] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:20.723 [2024-10-11 11:49:05.148711] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.723 [2024-10-11 11:49:05.148717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.148722] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:20.723 [2024-10-11 11:49:05.148725] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.723 [2024-10-11 11:49:05.148728] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.723 [2024-10-11 11:49:05.148732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.148737] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:20.723 [2024-10-11 11:49:05.148740] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:20.723 [2024-10-11 11:49:05.148743] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.723 [2024-10-11 11:49:05.148747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:20.723 [2024-10-11 11:49:05.156673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.156683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.156690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:20.723 [2024-10-11 11:49:05.156695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:20.723 ===================================================== 00:15:20.723 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:20.723 ===================================================== 00:15:20.723 Controller Capabilities/Features 00:15:20.723 ================================ 00:15:20.723 Vendor ID: 4e58 00:15:20.723 Subsystem Vendor ID: 4e58 00:15:20.723 Serial Number: SPDK2 00:15:20.723 Model Number: SPDK bdev Controller 00:15:20.723 Firmware Version: 25.01 00:15:20.723 Recommended Arb Burst: 6 00:15:20.723 IEEE OUI Identifier: 8d 6b 50 00:15:20.723 Multi-path I/O 00:15:20.723 May have multiple subsystem ports: Yes 00:15:20.723 May have multiple controllers: Yes 00:15:20.723 Associated with SR-IOV VF: No 00:15:20.723 Max Data Transfer Size: 131072 00:15:20.723 Max Number of Namespaces: 32 00:15:20.723 Max Number of I/O Queues: 127 00:15:20.723 NVMe Specification Version (VS): 1.3 00:15:20.723 NVMe Specification Version (Identify): 1.3 00:15:20.723 Maximum Queue Entries: 256 00:15:20.723 Contiguous Queues Required: Yes 00:15:20.723 Arbitration Mechanisms Supported 00:15:20.723 Weighted Round Robin: Not Supported 00:15:20.723 Vendor Specific: Not Supported 00:15:20.723 Reset Timeout: 15000 ms 00:15:20.723 Doorbell Stride: 4 bytes 00:15:20.723 NVM Subsystem Reset: Not Supported 00:15:20.723 Command 
Sets Supported 00:15:20.723 NVM Command Set: Supported 00:15:20.723 Boot Partition: Not Supported 00:15:20.723 Memory Page Size Minimum: 4096 bytes 00:15:20.723 Memory Page Size Maximum: 4096 bytes 00:15:20.723 Persistent Memory Region: Not Supported 00:15:20.723 Optional Asynchronous Events Supported 00:15:20.723 Namespace Attribute Notices: Supported 00:15:20.723 Firmware Activation Notices: Not Supported 00:15:20.723 ANA Change Notices: Not Supported 00:15:20.723 PLE Aggregate Log Change Notices: Not Supported 00:15:20.723 LBA Status Info Alert Notices: Not Supported 00:15:20.723 EGE Aggregate Log Change Notices: Not Supported 00:15:20.723 Normal NVM Subsystem Shutdown event: Not Supported 00:15:20.723 Zone Descriptor Change Notices: Not Supported 00:15:20.723 Discovery Log Change Notices: Not Supported 00:15:20.723 Controller Attributes 00:15:20.724 128-bit Host Identifier: Supported 00:15:20.724 Non-Operational Permissive Mode: Not Supported 00:15:20.724 NVM Sets: Not Supported 00:15:20.724 Read Recovery Levels: Not Supported 00:15:20.724 Endurance Groups: Not Supported 00:15:20.724 Predictable Latency Mode: Not Supported 00:15:20.724 Traffic Based Keep ALive: Not Supported 00:15:20.724 Namespace Granularity: Not Supported 00:15:20.724 SQ Associations: Not Supported 00:15:20.724 UUID List: Not Supported 00:15:20.724 Multi-Domain Subsystem: Not Supported 00:15:20.724 Fixed Capacity Management: Not Supported 00:15:20.724 Variable Capacity Management: Not Supported 00:15:20.724 Delete Endurance Group: Not Supported 00:15:20.724 Delete NVM Set: Not Supported 00:15:20.724 Extended LBA Formats Supported: Not Supported 00:15:20.724 Flexible Data Placement Supported: Not Supported 00:15:20.724 00:15:20.724 Controller Memory Buffer Support 00:15:20.724 ================================ 00:15:20.724 Supported: No 00:15:20.724 00:15:20.724 Persistent Memory Region Support 00:15:20.724 ================================ 00:15:20.724 Supported: No 00:15:20.724 00:15:20.724 Admin Command Set Attributes 00:15:20.724 ============================ 00:15:20.724 Security Send/Receive: Not Supported 00:15:20.724 Format NVM: Not Supported 00:15:20.724 Firmware Activate/Download: Not Supported 00:15:20.724 Namespace Management: Not Supported 00:15:20.724 Device Self-Test: Not Supported 00:15:20.724 Directives: Not Supported 00:15:20.724 NVMe-MI: Not Supported 00:15:20.724 Virtualization Management: Not Supported 00:15:20.724 Doorbell Buffer Config: Not Supported 00:15:20.724 Get LBA Status Capability: Not Supported 00:15:20.724 Command & Feature Lockdown Capability: Not Supported 00:15:20.724 Abort Command Limit: 4 00:15:20.724 Async Event Request Limit: 4 00:15:20.724 Number of Firmware Slots: N/A 00:15:20.724 Firmware Slot 1 Read-Only: N/A 00:15:20.724 Firmware Activation Without Reset: N/A 00:15:20.724 Multiple Update Detection Support: N/A 00:15:20.724 Firmware Update Granularity: No Information Provided 00:15:20.724 Per-Namespace SMART Log: No 00:15:20.724 Asymmetric Namespace Access Log Page: Not Supported 00:15:20.724 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:20.724 Command Effects Log Page: Supported 00:15:20.724 Get Log Page Extended Data: Supported 00:15:20.724 Telemetry Log Pages: Not Supported 00:15:20.724 Persistent Event Log Pages: Not Supported 00:15:20.724 Supported Log Pages Log Page: May Support 00:15:20.724 Commands Supported & Effects Log Page: Not Supported 00:15:20.724 Feature Identifiers & Effects Log Page:May Support 00:15:20.724 NVMe-MI Commands & Effects Log Page: May Support 
00:15:20.724 Data Area 4 for Telemetry Log: Not Supported 00:15:20.724 Error Log Page Entries Supported: 128 00:15:20.724 Keep Alive: Supported 00:15:20.724 Keep Alive Granularity: 10000 ms 00:15:20.724 00:15:20.724 NVM Command Set Attributes 00:15:20.724 ========================== 00:15:20.724 Submission Queue Entry Size 00:15:20.724 Max: 64 00:15:20.724 Min: 64 00:15:20.724 Completion Queue Entry Size 00:15:20.724 Max: 16 00:15:20.724 Min: 16 00:15:20.724 Number of Namespaces: 32 00:15:20.724 Compare Command: Supported 00:15:20.724 Write Uncorrectable Command: Not Supported 00:15:20.724 Dataset Management Command: Supported 00:15:20.724 Write Zeroes Command: Supported 00:15:20.724 Set Features Save Field: Not Supported 00:15:20.724 Reservations: Not Supported 00:15:20.724 Timestamp: Not Supported 00:15:20.724 Copy: Supported 00:15:20.724 Volatile Write Cache: Present 00:15:20.724 Atomic Write Unit (Normal): 1 00:15:20.724 Atomic Write Unit (PFail): 1 00:15:20.724 Atomic Compare & Write Unit: 1 00:15:20.724 Fused Compare & Write: Supported 00:15:20.724 Scatter-Gather List 00:15:20.724 SGL Command Set: Supported (Dword aligned) 00:15:20.724 SGL Keyed: Not Supported 00:15:20.724 SGL Bit Bucket Descriptor: Not Supported 00:15:20.724 SGL Metadata Pointer: Not Supported 00:15:20.724 Oversized SGL: Not Supported 00:15:20.724 SGL Metadata Address: Not Supported 00:15:20.724 SGL Offset: Not Supported 00:15:20.724 Transport SGL Data Block: Not Supported 00:15:20.724 Replay Protected Memory Block: Not Supported 00:15:20.724 00:15:20.724 Firmware Slot Information 00:15:20.724 ========================= 00:15:20.724 Active slot: 1 00:15:20.724 Slot 1 Firmware Revision: 25.01 00:15:20.724 00:15:20.724 00:15:20.724 Commands Supported and Effects 00:15:20.724 ============================== 00:15:20.724 Admin Commands 00:15:20.724 -------------- 00:15:20.724 Get Log Page (02h): Supported 00:15:20.724 Identify (06h): Supported 00:15:20.724 Abort (08h): Supported 00:15:20.724 Set Features (09h): Supported 00:15:20.724 Get Features (0Ah): Supported 00:15:20.724 Asynchronous Event Request (0Ch): Supported 00:15:20.724 Keep Alive (18h): Supported 00:15:20.724 I/O Commands 00:15:20.724 ------------ 00:15:20.724 Flush (00h): Supported LBA-Change 00:15:20.724 Write (01h): Supported LBA-Change 00:15:20.724 Read (02h): Supported 00:15:20.724 Compare (05h): Supported 00:15:20.724 Write Zeroes (08h): Supported LBA-Change 00:15:20.724 Dataset Management (09h): Supported LBA-Change 00:15:20.724 Copy (19h): Supported LBA-Change 00:15:20.724 00:15:20.724 Error Log 00:15:20.724 ========= 00:15:20.724 00:15:20.724 Arbitration 00:15:20.724 =========== 00:15:20.724 Arbitration Burst: 1 00:15:20.724 00:15:20.724 Power Management 00:15:20.724 ================ 00:15:20.724 Number of Power States: 1 00:15:20.724 Current Power State: Power State #0 00:15:20.724 Power State #0: 00:15:20.724 Max Power: 0.00 W 00:15:20.724 Non-Operational State: Operational 00:15:20.724 Entry Latency: Not Reported 00:15:20.724 Exit Latency: Not Reported 00:15:20.724 Relative Read Throughput: 0 00:15:20.724 Relative Read Latency: 0 00:15:20.724 Relative Write Throughput: 0 00:15:20.724 Relative Write Latency: 0 00:15:20.724 Idle Power: Not Reported 00:15:20.724 Active Power: Not Reported 00:15:20.724 Non-Operational Permissive Mode: Not Supported 00:15:20.724 00:15:20.724 Health Information 00:15:20.724 ================== 00:15:20.724 Critical Warnings: 00:15:20.724 Available Spare Space: OK 00:15:20.724 Temperature: OK 00:15:20.724 Device 
Reliability: OK 00:15:20.724 Read Only: No 00:15:20.724 Volatile Memory Backup: OK 00:15:20.724 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:20.724 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:20.724 Available Spare: 0% 00:15:20.724 Available Spare Threshold: 0% 00:15:20.724 Life Percentage Used: 0% 00:15:20.724 Data Units Read: 0 00:15:20.724 Data Units Written: 0 00:15:20.724 Host Read Commands: 0 00:15:20.724 Host Write Commands: 0 00:15:20.724 Controller Busy Time: 0 minutes 00:15:20.724 Power Cycles: 0 00:15:20.724 Power On Hours: 0 hours 00:15:20.724 Unsafe Shutdowns: 0 00:15:20.724 Unrecoverable Media Errors: 0 00:15:20.724 Lifetime Error Log Entries: 0 00:15:20.725 Warning Temperature Time: 0 minutes 00:15:20.725 Critical Temperature Time: 0 minutes 00:15:20.725 [2024-10-11 11:49:05.156767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:20.724 [2024-10-11 11:49:05.164673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:20.724 [2024-10-11 11:49:05.164698] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:20.724 [2024-10-11 11:49:05.164704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.724 [2024-10-11 11:49:05.164709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.724 [2024-10-11 11:49:05.164713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.724 [2024-10-11 11:49:05.164718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.724 [2024-10-11 11:49:05.164756] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:20.724 [2024-10-11 11:49:05.164763] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:20.724 [2024-10-11 11:49:05.165758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.724 [2024-10-11 11:49:05.165793] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:20.724 [2024-10-11 11:49:05.165798] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:20.724 [2024-10-11 11:49:05.166761] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:20.724 [2024-10-11 11:49:05.166769] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:20.724 [2024-10-11 11:49:05.166815] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:20.724 [2024-10-11 11:49:05.167778] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:20.724 00:15:20.725 Number of Queues 00:15:20.725 ================ 00:15:20.725 Number of
I/O Submission Queues: 127 00:15:20.725 Number of I/O Completion Queues: 127 00:15:20.725 00:15:20.725 Active Namespaces 00:15:20.725 ================= 00:15:20.725 Namespace ID:1 00:15:20.725 Error Recovery Timeout: Unlimited 00:15:20.725 Command Set Identifier: NVM (00h) 00:15:20.725 Deallocate: Supported 00:15:20.725 Deallocated/Unwritten Error: Not Supported 00:15:20.725 Deallocated Read Value: Unknown 00:15:20.725 Deallocate in Write Zeroes: Not Supported 00:15:20.725 Deallocated Guard Field: 0xFFFF 00:15:20.725 Flush: Supported 00:15:20.725 Reservation: Supported 00:15:20.725 Namespace Sharing Capabilities: Multiple Controllers 00:15:20.725 Size (in LBAs): 131072 (0GiB) 00:15:20.725 Capacity (in LBAs): 131072 (0GiB) 00:15:20.725 Utilization (in LBAs): 131072 (0GiB) 00:15:20.725 NGUID: 609ADFDB8B21431B86C14078DD97D2DC 00:15:20.725 UUID: 609adfdb-8b21-431b-86c1-4078dd97d2dc 00:15:20.725 Thin Provisioning: Not Supported 00:15:20.725 Per-NS Atomic Units: Yes 00:15:20.725 Atomic Boundary Size (Normal): 0 00:15:20.725 Atomic Boundary Size (PFail): 0 00:15:20.725 Atomic Boundary Offset: 0 00:15:20.725 Maximum Single Source Range Length: 65535 00:15:20.725 Maximum Copy Length: 65535 00:15:20.725 Maximum Source Range Count: 1 00:15:20.725 NGUID/EUI64 Never Reused: No 00:15:20.725 Namespace Write Protected: No 00:15:20.725 Number of LBA Formats: 1 00:15:20.725 Current LBA Format: LBA Format #00 00:15:20.725 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:20.725 00:15:20.725 11:49:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:20.725 [2024-10-11 11:49:05.335634] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.009 Initializing NVMe Controllers 00:15:26.009 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.009 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:26.009 Initialization complete. Launching workers. 
00:15:26.009 ======================================================== 00:15:26.009 Latency(us) 00:15:26.009 Device Information : IOPS MiB/s Average min max 00:15:26.009 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40038.80 156.40 3196.98 842.53 7003.93 00:15:26.009 ======================================================== 00:15:26.009 Total : 40038.80 156.40 3196.98 842.53 7003.93 00:15:26.009 00:15:26.009 [2024-10-11 11:49:10.436852] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.009 11:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:26.009 [2024-10-11 11:49:10.620414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.300 Initializing NVMe Controllers 00:15:31.301 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.301 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:31.301 Initialization complete. Launching workers. 00:15:31.301 ======================================================== 00:15:31.301 Latency(us) 00:15:31.301 Device Information : IOPS MiB/s Average min max 00:15:31.301 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40084.00 156.58 3195.99 850.47 7160.54 00:15:31.301 ======================================================== 00:15:31.301 Total : 40084.00 156.58 3195.99 850.47 7160.54 00:15:31.301 00:15:31.301 [2024-10-11 11:49:15.640808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.301 11:49:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:31.301 [2024-10-11 11:49:15.827930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.585 [2024-10-11 11:49:20.977755] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.585 Initializing NVMe Controllers 00:15:36.585 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.585 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:36.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:36.585 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:36.585 Initialization complete. Launching workers. 
00:15:36.585 Starting thread on core 2 00:15:36.585 Starting thread on core 3 00:15:36.585 Starting thread on core 1 00:15:36.585 11:49:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:36.585 [2024-10-11 11:49:21.212027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.886 [2024-10-11 11:49:24.286931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.886 Initializing NVMe Controllers 00:15:39.886 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.886 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.886 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:39.886 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:39.886 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:39.886 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:39.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:39.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:39.886 Initialization complete. Launching workers. 00:15:39.886 Starting thread on core 1 with urgent priority queue 00:15:39.886 Starting thread on core 2 with urgent priority queue 00:15:39.886 Starting thread on core 3 with urgent priority queue 00:15:39.886 Starting thread on core 0 with urgent priority queue 00:15:39.886 SPDK bdev Controller (SPDK2 ) core 0: 14416.00 IO/s 6.94 secs/100000 ios 00:15:39.886 SPDK bdev Controller (SPDK2 ) core 1: 8434.67 IO/s 11.86 secs/100000 ios 00:15:39.886 SPDK bdev Controller (SPDK2 ) core 2: 11595.33 IO/s 8.62 secs/100000 ios 00:15:39.886 SPDK bdev Controller (SPDK2 ) core 3: 11854.67 IO/s 8.44 secs/100000 ios 00:15:39.886 ======================================================== 00:15:39.886 00:15:39.886 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:39.886 [2024-10-11 11:49:24.517122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.147 Initializing NVMe Controllers 00:15:40.147 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.147 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.147 Namespace ID: 1 size: 0GB 00:15:40.147 Initialization complete. 00:15:40.147 INFO: using host memory buffer for IO 00:15:40.147 Hello world! 
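Every example binary in this run (spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the target through the same -r transport-ID string. A minimal sketch of the pattern, reusing the flags of the @84 read run above; the SPDK shell variable is shorthand introduced here for readability, not something the harness defines:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # build tree used by this job
  # trtype selects the vfio-user transport, traddr is the per-controller socket
  # directory created by the target, subnqn names the subsystem to attach to
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The reconnect, arbitration and hello_world invocations in this stretch of the log pass the identical TRID string; only the workload flags differ.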
00:15:40.147 [2024-10-11 11:49:24.529198] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.147 11:49:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:40.147 [2024-10-11 11:49:24.753377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.529 Initializing NVMe Controllers 00:15:41.529 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.529 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.529 Initialization complete. Launching workers. 00:15:41.529 submit (in ns) avg, min, max = 7121.3, 2844.2, 3999635.0 00:15:41.529 complete (in ns) avg, min, max = 15462.2, 1634.2, 4082549.2 00:15:41.529 00:15:41.529 Submit histogram 00:15:41.529 ================ 00:15:41.529 Range in us Cumulative Count 00:15:41.529 2.840 - 2.853: 0.1425% ( 29) 00:15:41.529 2.853 - 2.867: 1.5475% ( 286) 00:15:41.529 2.867 - 2.880: 3.8713% ( 473) 00:15:41.529 2.880 - 2.893: 7.6345% ( 766) 00:15:41.529 2.893 - 2.907: 12.6062% ( 1012) 00:15:41.529 2.907 - 2.920: 18.2216% ( 1143) 00:15:41.529 2.920 - 2.933: 23.5225% ( 1079) 00:15:41.529 2.933 - 2.947: 29.8010% ( 1278) 00:15:41.529 2.947 - 2.960: 35.0528% ( 1069) 00:15:41.529 2.960 - 2.973: 39.9705% ( 1001) 00:15:41.529 2.973 - 2.987: 45.3009% ( 1085) 00:15:41.529 2.987 - 3.000: 50.6264% ( 1084) 00:15:41.529 3.000 - 3.013: 57.0474% ( 1307) 00:15:41.529 3.013 - 3.027: 65.8462% ( 1791) 00:15:41.529 3.027 - 3.040: 74.4682% ( 1755) 00:15:41.529 3.040 - 3.053: 81.7440% ( 1481) 00:15:41.529 3.053 - 3.067: 88.5974% ( 1395) 00:15:41.529 3.067 - 3.080: 93.7902% ( 1057) 00:15:41.529 3.080 - 3.093: 96.7821% ( 609) 00:15:41.529 3.093 - 3.107: 98.4230% ( 334) 00:15:41.529 3.107 - 3.120: 99.1010% ( 138) 00:15:41.529 3.120 - 3.133: 99.3515% ( 51) 00:15:41.529 3.133 - 3.147: 99.4498% ( 20) 00:15:41.529 3.147 - 3.160: 99.4596% ( 2) 00:15:41.529 3.160 - 3.173: 99.4743% ( 3) 00:15:41.529 3.187 - 3.200: 99.4792% ( 1) 00:15:41.529 3.200 - 3.213: 99.4842% ( 1) 00:15:41.529 3.227 - 3.240: 99.4891% ( 1) 00:15:41.529 3.360 - 3.373: 99.4940% ( 1) 00:15:41.529 3.440 - 3.467: 99.4989% ( 1) 00:15:41.529 3.520 - 3.547: 99.5038% ( 1) 00:15:41.529 3.600 - 3.627: 99.5136% ( 2) 00:15:41.529 3.787 - 3.813: 99.5185% ( 1) 00:15:41.529 3.840 - 3.867: 99.5235% ( 1) 00:15:41.529 3.867 - 3.893: 99.5284% ( 1) 00:15:41.529 4.133 - 4.160: 99.5333% ( 1) 00:15:41.529 4.187 - 4.213: 99.5382% ( 1) 00:15:41.529 4.320 - 4.347: 99.5431% ( 1) 00:15:41.529 4.373 - 4.400: 99.5480% ( 1) 00:15:41.530 4.400 - 4.427: 99.5529% ( 1) 00:15:41.530 4.453 - 4.480: 99.5628% ( 2) 00:15:41.530 4.613 - 4.640: 99.5677% ( 1) 00:15:41.530 4.640 - 4.667: 99.5726% ( 1) 00:15:41.530 4.747 - 4.773: 99.5824% ( 2) 00:15:41.530 4.800 - 4.827: 99.5922% ( 2) 00:15:41.530 4.827 - 4.853: 99.6119% ( 4) 00:15:41.530 4.880 - 4.907: 99.6168% ( 1) 00:15:41.530 4.933 - 4.960: 99.6266% ( 2) 00:15:41.530 4.960 - 4.987: 99.6315% ( 1) 00:15:41.530 5.013 - 5.040: 99.6512% ( 4) 00:15:41.530 5.040 - 5.067: 99.6561% ( 1) 00:15:41.530 5.067 - 5.093: 99.6659% ( 2) 00:15:41.530 5.093 - 5.120: 99.6758% ( 2) 00:15:41.530 5.120 - 5.147: 99.6807% ( 1) 00:15:41.530 5.147 - 5.173: 99.6856% ( 1) 00:15:41.530 5.173 - 5.200: 99.6905% ( 1) 00:15:41.530 5.227 - 5.253: 99.6954% ( 1) 00:15:41.530 5.493 - 5.520: 
99.7003% ( 1) 00:15:41.530 5.627 - 5.653: 99.7101% ( 2) 00:15:41.530 5.653 - 5.680: 99.7151% ( 1) 00:15:41.530 5.867 - 5.893: 99.7249% ( 2) 00:15:41.530 5.893 - 5.920: 99.7347% ( 2) 00:15:41.530 5.920 - 5.947: 99.7445% ( 2) 00:15:41.530 5.973 - 6.000: 99.7494% ( 1) 00:15:41.530 6.027 - 6.053: 99.7544% ( 1) 00:15:41.530 6.080 - 6.107: 99.7593% ( 1) 00:15:41.530 6.133 - 6.160: 99.7691% ( 2) 00:15:41.530 6.160 - 6.187: 99.7740% ( 1) 00:15:41.530 6.293 - 6.320: 99.7789% ( 1) 00:15:41.530 6.373 - 6.400: 99.7887% ( 2) 00:15:41.530 6.400 - 6.427: 99.7937% ( 1) 00:15:41.530 6.613 - 6.640: 99.7986% ( 1) 00:15:41.530 6.693 - 6.720: 99.8035% ( 1) 00:15:41.530 6.773 - 6.800: 99.8084% ( 1) 00:15:41.530 6.827 - 6.880: 99.8182% ( 2) 00:15:41.530 6.880 - 6.933: 99.8330% ( 3) 00:15:41.530 7.093 - 7.147: 99.8379% ( 1) 00:15:41.530 7.200 - 7.253: 99.8428% ( 1) 00:15:41.530 7.253 - 7.307: 99.8477% ( 1) 00:15:41.530 [2024-10-11 11:49:25.848234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.530 7.413 - 7.467: 99.8526% ( 1) 00:15:41.530 7.627 - 7.680: 99.8575% ( 1) 00:15:41.530 7.840 - 7.893: 99.8674% ( 2) 00:15:41.530 7.893 - 7.947: 99.8772% ( 2) 00:15:41.530 8.000 - 8.053: 99.8821% ( 1) 00:15:41.530 8.213 - 8.267: 99.8870% ( 1) 00:15:41.530 11.413 - 11.467: 99.8919% ( 1) 00:15:41.530 12.160 - 12.213: 99.8968% ( 1) 00:15:41.530 3986.773 - 4014.080: 100.0000% ( 21) 00:15:41.530 00:15:41.530 Complete histogram 00:15:41.530 ================== 00:15:41.530 Range in us Cumulative Count 00:15:41.530 1.633 - 1.640: 0.0197% ( 4) 00:15:41.530 1.640 - 1.647: 0.4962% ( 97) 00:15:41.530 1.647 - 1.653: 0.6731% ( 36) 00:15:41.530 1.653 - 1.660: 0.7074% ( 7) 00:15:41.530 1.660 - 1.667: 0.8106% ( 21) 00:15:41.530 1.667 - 1.673: 0.8548% ( 9) 00:15:41.530 1.673 - 1.680: 0.8941% ( 8) 00:15:41.530 1.680 - 1.687: 10.1105% ( 1876) 00:15:41.530 1.687 - 1.693: 50.9850% ( 8320) 00:15:41.530 1.693 - 1.700: 55.9519% ( 1011) 00:15:41.530 1.700 - 1.707: 68.2879% ( 2511) 00:15:41.530 1.707 - 1.720: 79.2533% ( 2232) 00:15:41.530 1.720 - 1.733: 83.1540% ( 794) 00:15:41.530 1.733 - 1.747: 84.6082% ( 296) 00:15:41.530 1.747 - 1.760: 89.2361% ( 942) 00:15:41.530 1.760 - 1.773: 95.1265% ( 1199) 00:15:41.530 1.773 - 1.787: 97.9219% ( 569) 00:15:41.530 1.787 - 1.800: 99.0862% ( 237) 00:15:41.530 1.800 - 1.813: 99.4399% ( 72) 00:15:41.530 1.813 - 1.827: 99.4743% ( 7) 00:15:41.530 1.827 - 1.840: 99.4891% ( 3) 00:15:41.530 3.520 - 3.547: 99.4989% ( 2) 00:15:41.530 3.547 - 3.573: 99.5038% ( 1) 00:15:41.530 3.733 - 3.760: 99.5087% ( 1) 00:15:41.530 3.840 - 3.867: 99.5185% ( 2) 00:15:41.530 3.973 - 4.000: 99.5235% ( 1) 00:15:41.530 4.347 - 4.373: 99.5333% ( 2) 00:15:41.530 4.427 - 4.453: 99.5382% ( 1) 00:15:41.530 4.693 - 4.720: 99.5431% ( 1) 00:15:41.530 4.747 - 4.773: 99.5480% ( 1) 00:15:41.530 4.960 - 4.987: 99.5529% ( 1) 00:15:41.530 4.987 - 5.013: 99.5578% ( 1) 00:15:41.530 5.200 - 5.227: 99.5677% ( 2) 00:15:41.530 5.280 - 5.307: 99.5726% ( 1) 00:15:41.530 5.360 - 5.387: 99.5775% ( 1) 00:15:41.530 5.467 - 5.493: 99.5824% ( 1) 00:15:41.530 5.547 - 5.573: 99.5873% ( 1) 00:15:41.530 5.680 - 5.707: 99.5922% ( 1) 00:15:41.530 5.813 - 5.840: 99.5972% ( 1) 00:15:41.530 5.973 - 6.000: 99.6021% ( 1) 00:15:41.530 6.160 - 6.187: 99.6070% ( 1) 00:15:41.530 6.453 - 6.480: 99.6119% ( 1) 00:15:41.530 6.720 - 6.747: 99.6168% ( 1) 00:15:41.530 6.933 - 6.987: 99.6266% ( 2) 00:15:41.530 6.987 - 7.040: 99.6315% ( 1) 00:15:41.530 7.467 - 7.520: 99.6365% ( 1) 00:15:41.530 7.893 - 7.947: 99.6414% ( 1) 
00:15:41.530 8.160 - 8.213: 99.6463% ( 1) 00:15:41.530 11.360 - 11.413: 99.6512% ( 1) 00:15:41.530 12.907 - 12.960: 99.6561% ( 1) 00:15:41.530 3986.773 - 4014.080: 99.9754% ( 65) 00:15:41.530 4014.080 - 4041.387: 99.9951% ( 4) 00:15:41.530 4068.693 - 4096.000: 100.0000% ( 1) 00:15:41.530 00:15:41.530 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:41.530 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:41.530 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:41.530 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:41.530 11:49:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:41.530 [ 00:15:41.530 { 00:15:41.530 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:41.530 "subtype": "Discovery", 00:15:41.530 "listen_addresses": [], 00:15:41.530 "allow_any_host": true, 00:15:41.530 "hosts": [] 00:15:41.530 }, 00:15:41.530 { 00:15:41.530 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:41.530 "subtype": "NVMe", 00:15:41.530 "listen_addresses": [ 00:15:41.530 { 00:15:41.530 "trtype": "VFIOUSER", 00:15:41.530 "adrfam": "IPv4", 00:15:41.530 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:41.530 "trsvcid": "0" 00:15:41.530 } 00:15:41.530 ], 00:15:41.530 "allow_any_host": true, 00:15:41.530 "hosts": [], 00:15:41.530 "serial_number": "SPDK1", 00:15:41.530 "model_number": "SPDK bdev Controller", 00:15:41.530 "max_namespaces": 32, 00:15:41.530 "min_cntlid": 1, 00:15:41.530 "max_cntlid": 65519, 00:15:41.530 "namespaces": [ 00:15:41.530 { 00:15:41.530 "nsid": 1, 00:15:41.530 "bdev_name": "Malloc1", 00:15:41.530 "name": "Malloc1", 00:15:41.530 "nguid": "8F3E22FEDEE64510BA68F29B0B3A1F27", 00:15:41.530 "uuid": "8f3e22fe-dee6-4510-ba68-f29b0b3a1f27" 00:15:41.530 }, 00:15:41.530 { 00:15:41.530 "nsid": 2, 00:15:41.530 "bdev_name": "Malloc3", 00:15:41.530 "name": "Malloc3", 00:15:41.530 "nguid": "4295A04E08DF484C9D67F655D0AD0B2A", 00:15:41.530 "uuid": "4295a04e-08df-484c-9d67-f655d0ad0b2a" 00:15:41.530 } 00:15:41.530 ] 00:15:41.530 }, 00:15:41.530 { 00:15:41.530 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:41.530 "subtype": "NVMe", 00:15:41.530 "listen_addresses": [ 00:15:41.530 { 00:15:41.530 "trtype": "VFIOUSER", 00:15:41.530 "adrfam": "IPv4", 00:15:41.530 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:41.530 "trsvcid": "0" 00:15:41.530 } 00:15:41.530 ], 00:15:41.530 "allow_any_host": true, 00:15:41.530 "hosts": [], 00:15:41.530 "serial_number": "SPDK2", 00:15:41.530 "model_number": "SPDK bdev Controller", 00:15:41.530 "max_namespaces": 32, 00:15:41.530 "min_cntlid": 1, 00:15:41.530 "max_cntlid": 65519, 00:15:41.530 "namespaces": [ 00:15:41.530 { 00:15:41.530 "nsid": 1, 00:15:41.530 "bdev_name": "Malloc2", 00:15:41.530 "name": "Malloc2", 00:15:41.530 "nguid": "609ADFDB8B21431B86C14078DD97D2DC", 00:15:41.530 "uuid": "609adfdb-8b21-431b-86c1-4078dd97d2dc" 00:15:41.530 } 00:15:41.530 ] 00:15:41.530 } 00:15:41.530 ] 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- target/nvmf_vfio_user.sh@34 -- # aerpid=979801 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:41.530 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:41.790 [2024-10-11 11:49:26.208048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.790 Malloc4 00:15:41.790 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:41.790 [2024-10-11 11:49:26.418453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.052 Asynchronous Event Request test 00:15:42.052 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.052 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.052 Registering asynchronous event callbacks... 00:15:42.052 Starting namespace attribute notice tests for all controllers... 00:15:42.052 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:42.052 aer_cb - Changed Namespace 00:15:42.052 Cleaning up... 
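The listing that follows is the raw nvmf_get_subsystems response once Malloc4 has been attached as a second namespace of cnode2. As a hedged sketch of post-processing that JSON (jq is an assumption here; the harness itself does not use it), the namespace table of one subsystem could be extracted like so:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
    | jq -r '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2")
             | .namespaces[] | "\(.nsid)\t\(.bdev_name)\t\(.uuid)"'   # one line per namespace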
00:15:42.052 [ 00:15:42.052 { 00:15:42.052 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.052 "subtype": "Discovery", 00:15:42.052 "listen_addresses": [], 00:15:42.052 "allow_any_host": true, 00:15:42.052 "hosts": [] 00:15:42.052 }, 00:15:42.052 { 00:15:42.052 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.052 "subtype": "NVMe", 00:15:42.052 "listen_addresses": [ 00:15:42.052 { 00:15:42.052 "trtype": "VFIOUSER", 00:15:42.052 "adrfam": "IPv4", 00:15:42.052 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.052 "trsvcid": "0" 00:15:42.052 } 00:15:42.052 ], 00:15:42.052 "allow_any_host": true, 00:15:42.052 "hosts": [], 00:15:42.052 "serial_number": "SPDK1", 00:15:42.052 "model_number": "SPDK bdev Controller", 00:15:42.052 "max_namespaces": 32, 00:15:42.052 "min_cntlid": 1, 00:15:42.052 "max_cntlid": 65519, 00:15:42.052 "namespaces": [ 00:15:42.052 { 00:15:42.052 "nsid": 1, 00:15:42.052 "bdev_name": "Malloc1", 00:15:42.052 "name": "Malloc1", 00:15:42.052 "nguid": "8F3E22FEDEE64510BA68F29B0B3A1F27", 00:15:42.052 "uuid": "8f3e22fe-dee6-4510-ba68-f29b0b3a1f27" 00:15:42.052 }, 00:15:42.052 { 00:15:42.052 "nsid": 2, 00:15:42.052 "bdev_name": "Malloc3", 00:15:42.052 "name": "Malloc3", 00:15:42.052 "nguid": "4295A04E08DF484C9D67F655D0AD0B2A", 00:15:42.052 "uuid": "4295a04e-08df-484c-9d67-f655d0ad0b2a" 00:15:42.052 } 00:15:42.052 ] 00:15:42.052 }, 00:15:42.052 { 00:15:42.052 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.052 "subtype": "NVMe", 00:15:42.052 "listen_addresses": [ 00:15:42.052 { 00:15:42.052 "trtype": "VFIOUSER", 00:15:42.052 "adrfam": "IPv4", 00:15:42.052 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.052 "trsvcid": "0" 00:15:42.052 } 00:15:42.052 ], 00:15:42.052 "allow_any_host": true, 00:15:42.052 "hosts": [], 00:15:42.052 "serial_number": "SPDK2", 00:15:42.052 "model_number": "SPDK bdev Controller", 00:15:42.052 "max_namespaces": 32, 00:15:42.052 "min_cntlid": 1, 00:15:42.052 "max_cntlid": 65519, 00:15:42.052 "namespaces": [ 00:15:42.052 { 00:15:42.052 "nsid": 1, 00:15:42.052 "bdev_name": "Malloc2", 00:15:42.052 "name": "Malloc2", 00:15:42.052 "nguid": "609ADFDB8B21431B86C14078DD97D2DC", 00:15:42.052 "uuid": "609adfdb-8b21-431b-86c1-4078dd97d2dc" 00:15:42.052 }, 00:15:42.052 { 00:15:42.052 "nsid": 2, 00:15:42.052 "bdev_name": "Malloc4", 00:15:42.052 "name": "Malloc4", 00:15:42.052 "nguid": "7F791634D6B64D5F8124AE35B527B9A1", 00:15:42.052 "uuid": "7f791634-d6b6-4d5f-8124-ae35b527b9a1" 00:15:42.052 } 00:15:42.052 ] 00:15:42.052 } 00:15:42.052 ] 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 979801 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 970816 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 970816 ']' 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 970816 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.052 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 970816 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 970816' 00:15:42.313 killing process with pid 970816 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 970816 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 970816 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=979983 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 979983' 00:15:42.313 Process pid: 979983 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 979983 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 979983 ']' 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.313 11:49:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:42.313 [2024-10-11 11:49:26.892339] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:42.313 [2024-10-11 11:49:26.893282] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:15:42.313 [2024-10-11 11:49:26.893329] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.575 [2024-10-11 11:49:26.970977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.575 [2024-10-11 11:49:27.004641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.575 [2024-10-11 11:49:27.004678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.575 [2024-10-11 11:49:27.004684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.575 [2024-10-11 11:49:27.004689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.575 [2024-10-11 11:49:27.004693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.576 [2024-10-11 11:49:27.006070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.576 [2024-10-11 11:49:27.006305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.576 [2024-10-11 11:49:27.006458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.576 [2024-10-11 11:49:27.006459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.576 [2024-10-11 11:49:27.058837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:42.576 [2024-10-11 11:49:27.059813] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:42.576 [2024-10-11 11:49:27.060618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:42.576 [2024-10-11 11:49:27.061204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:42.576 [2024-10-11 11:49:27.061221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
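With the interrupt-mode nvmf_tgt up, the trace below rebuilds both vfio-user devices purely over JSON-RPC. A condensed sketch of the per-device sequence, with the full scripts/rpc.py path shortened to rpc.py for readability; the calls and arguments are the ones the harness issues for device 1:

  rpc.py nvmf_create_transport -t VFIOUSER -M -I     # VFIOUSER transport in interrupt mode
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1    # socket directory for controller 1
  rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MB ramdisk with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1   # -a: allow any host
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Device 2 repeats the same steps with Malloc2, serial SPDK2 and the vfio-user2/2 socket path.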
00:15:43.148 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.148 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:43.148 11:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:44.090 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:44.351 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:44.351 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:44.351 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.351 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:44.351 11:49:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:44.612 Malloc1 00:15:44.612 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:44.873 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:45.133 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:45.133 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:45.133 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:45.133 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:45.394 Malloc2 00:15:45.394 11:49:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:45.654 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:45.654 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 979983 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 979983 ']' 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 979983 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 979983 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 979983' 00:15:45.914 killing process with pid 979983 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 979983 00:15:45.914 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 979983 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:46.174 00:15:46.174 real 0m50.845s 00:15:46.174 user 3m14.780s 00:15:46.174 sys 0m2.733s 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:46.174 ************************************ 00:15:46.174 END TEST nvmf_vfio_user 00:15:46.174 ************************************ 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:46.174 ************************************ 00:15:46.174 START TEST nvmf_vfio_user_nvme_compliance 00:15:46.174 ************************************ 00:15:46.174 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:46.174 * Looking for test storage... 
00:15:46.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:46.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.436 --rc genhtml_branch_coverage=1 00:15:46.436 --rc genhtml_function_coverage=1 00:15:46.436 --rc genhtml_legend=1 00:15:46.436 --rc geninfo_all_blocks=1 00:15:46.436 --rc geninfo_unexecuted_blocks=1 00:15:46.436 00:15:46.436 ' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:46.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.436 --rc genhtml_branch_coverage=1 00:15:46.436 --rc genhtml_function_coverage=1 00:15:46.436 --rc genhtml_legend=1 00:15:46.436 --rc geninfo_all_blocks=1 00:15:46.436 --rc geninfo_unexecuted_blocks=1 00:15:46.436 00:15:46.436 ' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:46.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.436 --rc genhtml_branch_coverage=1 00:15:46.436 --rc genhtml_function_coverage=1 00:15:46.436 --rc genhtml_legend=1 00:15:46.436 --rc geninfo_all_blocks=1 00:15:46.436 --rc geninfo_unexecuted_blocks=1 00:15:46.436 00:15:46.436 ' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:46.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.436 --rc genhtml_branch_coverage=1 00:15:46.436 --rc genhtml_function_coverage=1 00:15:46.436 --rc genhtml_legend=1 00:15:46.436 --rc geninfo_all_blocks=1 00:15:46.436 --rc 
geninfo_unexecuted_blocks=1 00:15:46.436 00:15:46.436 ' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.436 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=980888 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 980888' 00:15:46.437 Process pid: 980888 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 980888 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 980888 ']' 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.437 11:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.437 [2024-10-11 11:49:31.001720] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:15:46.437 [2024-10-11 11:49:31.001778] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.437 [2024-10-11 11:49:31.065872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:46.698 [2024-10-11 11:49:31.106139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.698 [2024-10-11 11:49:31.106177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.698 [2024-10-11 11:49:31.106187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.698 [2024-10-11 11:49:31.106196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.698 [2024-10-11 11:49:31.106202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.698 [2024-10-11 11:49:31.107646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.698 [2024-10-11 11:49:31.107803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.698 [2024-10-11 11:49:31.107803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.698 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.698 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:46.698 11:49:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.639 malloc0 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:47.639 11:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.639 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.898 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.898 11:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:47.898 00:15:47.898 00:15:47.898 CUnit - A unit testing framework for C - Version 2.1-3 00:15:47.898 http://cunit.sourceforge.net/ 00:15:47.898 00:15:47.898 00:15:47.898 Suite: nvme_compliance 00:15:47.898 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-11 11:49:32.417065] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.898 [2024-10-11 11:49:32.418369] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:47.898 [2024-10-11 11:49:32.418382] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:47.898 [2024-10-11 11:49:32.418387] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:47.898 [2024-10-11 11:49:32.420087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.898 passed 00:15:47.898 Test: admin_identify_ctrlr_verify_fused ...[2024-10-11 11:49:32.497586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.898 [2024-10-11 11:49:32.500610] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.898 passed 00:15:48.157 Test: admin_identify_ns ...[2024-10-11 11:49:32.579161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.157 [2024-10-11 11:49:32.639675] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:48.157 [2024-10-11 11:49:32.647683] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:48.157 [2024-10-11 11:49:32.668760] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:48.157 passed 00:15:48.158 Test: admin_get_features_mandatory_features ...[2024-10-11 11:49:32.742054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.158 [2024-10-11 11:49:32.745066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.158 passed 00:15:48.418 Test: admin_get_features_optional_features ...[2024-10-11 11:49:32.821520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.418 [2024-10-11 11:49:32.824543] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.418 passed 00:15:48.418 Test: admin_set_features_number_of_queues ...[2024-10-11 11:49:32.900273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.418 [2024-10-11 11:49:33.004752] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.418 passed 00:15:48.678 Test: admin_get_log_page_mandatory_logs ...[2024-10-11 11:49:33.079845] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.678 [2024-10-11 11:49:33.082872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.678 passed 00:15:48.678 Test: admin_get_log_page_with_lpo ...[2024-10-11 11:49:33.158014] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.679 [2024-10-11 11:49:33.229675] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:48.679 [2024-10-11 11:49:33.242736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.679 passed 00:15:48.940 Test: fabric_property_get ...[2024-10-11 11:49:33.314029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.940 [2024-10-11 11:49:33.315230] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:48.940 [2024-10-11 11:49:33.317051] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.940 passed 00:15:48.940 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-11 11:49:33.394497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.940 [2024-10-11 11:49:33.395690] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:48.940 [2024-10-11 11:49:33.397516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.940 passed 00:15:48.940 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-11 11:49:33.472029] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.940 [2024-10-11 11:49:33.555677] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.940 [2024-10-11 11:49:33.571673] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:49.201 [2024-10-11 11:49:33.576745] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.201 passed 00:15:49.201 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-11 11:49:33.650840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.201 [2024-10-11 11:49:33.652038] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:49.201 [2024-10-11 11:49:33.653864] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller
00:15:49.201 passed
00:15:49.201 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-11 11:49:33.730596] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:49.201 [2024-10-11 11:49:33.805674] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:49.201 [2024-10-11 11:49:33.831675] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:49.461 [2024-10-11 11:49:33.836744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:49.461 passed
00:15:49.461 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-11 11:49:33.909017] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:49.461 [2024-10-11 11:49:33.910212] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:15:49.461 [2024-10-11 11:49:33.910232] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:15:49.461 [2024-10-11 11:49:33.914038] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:49.461 passed
00:15:49.461 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-11 11:49:33.990045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:49.461 [2024-10-11 11:49:34.082675] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:15:49.461 [2024-10-11 11:49:34.090679] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:15:49.721 [2024-10-11 11:49:34.098675] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:15:49.721 [2024-10-11 11:49:34.106673] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:15:49.721 [2024-10-11 11:49:34.135748] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:49.721 passed
00:15:49.721 Test: admin_create_io_sq_verify_pc ...[2024-10-11 11:49:34.207048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:49.721 [2024-10-11 11:49:34.223678] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:15:49.721 [2024-10-11 11:49:34.241213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:49.721 passed
00:15:49.721 Test: admin_create_io_qp_max_qps ...[2024-10-11 11:49:34.319694] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:51.110 [2024-10-11 11:49:35.403675] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:15:51.371 [2024-10-11 11:49:35.783228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:51.371 passed
00:15:51.371 Test: admin_create_io_sq_shared_cq ...[2024-10-11 11:49:35.860023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:51.371 [2024-10-11 11:49:35.991672] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:51.630 [2024-10-11 11:49:36.028716] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:51.630 passed
00:15:51.630
00:15:51.630 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:15:51.630               suites      1      1    n/a      0        0
00:15:51.630                tests     18     18     18      0        0
00:15:51.630              asserts    360    360    360      0      n/a
00:15:51.630
00:15:51.630 Elapsed time = 1.482 seconds
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 980888
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 980888 ']'
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 980888
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 980888
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 980888'
00:15:51.630 killing process with pid 980888
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 980888
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 980888
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:15:51.630 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:15:51.630
00:15:51.630 real 0m5.541s
00:15:51.630 user 0m15.538s
00:15:51.630 sys 0m0.504s
11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable
11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:51.631 ************************************
00:15:51.631 END TEST nvmf_vfio_user_nvme_compliance
00:15:51.631 ************************************
00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:51.891 ************************************
00:15:51.891 START TEST nvmf_vfio_user_fuzz
00:15:51.891 ************************************
00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:51.891 * Looking for test storage...
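Both the compliance run that just ended and the fuzz run starting here bring up the same vfio-user target before talking to it. Condensed from the xtrace above into a plain script (a sketch reconstructed from the logged commands, with rpc_cmd expanded to SPDK's scripts/rpc.py wrapper; the waitforlisten step and error handling are omitted):

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
# launch the target: shm id 0, all tracepoint groups, 3-core mask (the fuzz run uses -m 0x1 instead)
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
# ... wait for /var/tmp/spdk.sock to come up, then configure over RPC:
"$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b malloc0   # 64 MB malloc bdev, 512-byte blocks
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0
# the compliance binary then attaches to that socket directory as if it were a local NVMe device:
"$SPDK/test/nvme/compliance/nvme_compliance" -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'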
00:15:51.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.891 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:52.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.152 --rc genhtml_branch_coverage=1 00:15:52.152 --rc genhtml_function_coverage=1 00:15:52.152 --rc genhtml_legend=1 00:15:52.152 --rc geninfo_all_blocks=1 00:15:52.152 --rc geninfo_unexecuted_blocks=1 00:15:52.152 00:15:52.152 ' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:52.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.152 --rc genhtml_branch_coverage=1 00:15:52.152 --rc genhtml_function_coverage=1 00:15:52.152 --rc genhtml_legend=1 00:15:52.152 --rc geninfo_all_blocks=1 00:15:52.152 --rc geninfo_unexecuted_blocks=1 00:15:52.152 00:15:52.152 ' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:52.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.152 --rc genhtml_branch_coverage=1 00:15:52.152 --rc genhtml_function_coverage=1 00:15:52.152 --rc genhtml_legend=1 00:15:52.152 --rc geninfo_all_blocks=1 00:15:52.152 --rc geninfo_unexecuted_blocks=1 00:15:52.152 00:15:52.152 ' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:52.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.152 --rc genhtml_branch_coverage=1 00:15:52.152 --rc genhtml_function_coverage=1 00:15:52.152 --rc genhtml_legend=1 00:15:52.152 --rc geninfo_all_blocks=1 00:15:52.152 --rc geninfo_unexecuted_blocks=1 00:15:52.152 00:15:52.152 ' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:52.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=981969 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 981969' 00:15:52.152 Process pid: 981969 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 981969 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 981969 ']' 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
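The waitforlisten call echoed above blocks until the freshly launched target answers on its RPC socket. A minimal stand-in for what it polls (assuming the default /var/tmp/spdk.sock and SPDK's scripts/rpc.py; the real helper in autotest_common.sh does more bookkeeping, including the max_retries=100 budget seen in the trace):

# poll until nvmf_tgt (pid 981969) accepts RPCs on its UNIX domain socket
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 981969 2>/dev/null || exit 1   # give up if the target died during startup
    sleep 0.5
done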
00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.152 11:49:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.096 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.096 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:53.096 11:49:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.036 malloc0 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
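With the transport, the malloc0 namespace, and the vfio-user listener in place, the fuzzer below is aimed at that endpoint. Flags as logged: -m 0x2 pins the fuzzer to core 1 while the target holds core 0 (launched with -m 0x1 above), -t 30 bounds the run to 30 seconds, -S 123456 fixes the random seed so a failing run can be replayed, and -F passes the transport ID; -N and -a are reproduced as-is from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The summary it prints below (about 1.34M I/O commands and 300k admin commands completed in the 30-second window) is the rate of random command injection the malloc-backed target sustained without crashing.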
00:15:54.036 11:49:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:26.163 Fuzzing completed. Shutting down the fuzz application 00:16:26.163 00:16:26.163 Dumping successful admin opcodes: 00:16:26.163 8, 9, 10, 24, 00:16:26.163 Dumping successful io opcodes: 00:16:26.163 0, 00:16:26.163 NS: 0x20000081ef00 I/O qp, Total commands completed: 1336734, total successful commands: 5239, random_seed: 2606185920 00:16:26.163 NS: 0x20000081ef00 admin qp, Total commands completed: 300554, total successful commands: 2416, random_seed: 2137462912 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 981969 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 981969 ']' 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 981969 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 981969 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 981969' 00:16:26.163 killing process with pid 981969 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 981969 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 981969 00:16:26.163 11:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:26.163 00:16:26.163 real 0m32.787s 00:16:26.163 user 0m37.993s 00:16:26.163 sys 0m23.493s 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:26.163 
************************************ 00:16:26.163 END TEST nvmf_vfio_user_fuzz 00:16:26.163 ************************************ 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.163 ************************************ 00:16:26.163 START TEST nvmf_auth_target 00:16:26.163 ************************************ 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:26.163 * Looking for test storage... 00:16:26.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:26.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.163 --rc genhtml_branch_coverage=1 00:16:26.163 --rc genhtml_function_coverage=1 00:16:26.163 --rc genhtml_legend=1 00:16:26.163 --rc geninfo_all_blocks=1 00:16:26.163 --rc geninfo_unexecuted_blocks=1 00:16:26.163 00:16:26.163 ' 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:26.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.163 --rc genhtml_branch_coverage=1 00:16:26.163 --rc genhtml_function_coverage=1 00:16:26.163 --rc genhtml_legend=1 00:16:26.163 --rc geninfo_all_blocks=1 00:16:26.163 --rc geninfo_unexecuted_blocks=1 00:16:26.163 00:16:26.163 ' 00:16:26.163 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:26.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.164 --rc genhtml_branch_coverage=1 00:16:26.164 --rc genhtml_function_coverage=1 00:16:26.164 --rc genhtml_legend=1 00:16:26.164 --rc geninfo_all_blocks=1 00:16:26.164 --rc geninfo_unexecuted_blocks=1 00:16:26.164 00:16:26.164 ' 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:26.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.164 --rc genhtml_branch_coverage=1 00:16:26.164 --rc genhtml_function_coverage=1 00:16:26.164 --rc genhtml_legend=1 00:16:26.164 --rc geninfo_all_blocks=1 00:16:26.164 --rc geninfo_unexecuted_blocks=1 00:16:26.164 00:16:26.164 ' 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.164 11:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:26.164 11:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:32.765 
11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:32.765 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:32.765 11:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:32.765 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:32.765 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:32.765 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:32.766 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:32.766 11:50:16 
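nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test link: cvl_0_0 moves into namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits the NVMe/TCP listener port. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port

The two pings that follow confirm reachability in both directions before the target is started.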
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:32.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:16:32.766 00:16:32.766 --- 10.0.0.2 ping statistics --- 00:16:32.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.766 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:32.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:16:32.766 00:16:32.766 --- 10.0.0.1 ping statistics --- 00:16:32.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.766 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=991946 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 991946 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 991946 ']' 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.766 11:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=992293 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=3da5c73b8159e8df7513f42287517b29a5012487558bf0a1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.MFd 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 3da5c73b8159e8df7513f42287517b29a5012487558bf0a1 0 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 3da5c73b8159e8df7513f42287517b29a5012487558bf0a1 0 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=3da5c73b8159e8df7513f42287517b29a5012487558bf0a1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
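Two SPDK apps are now up: nvmf_tgt inside the namespace is the authenticating target (pid 991946, RPC on /var/tmp/spdk.sock) and spdk_tgt in the root namespace drives the host side of DH-HMAC-CHAP (RPC on /var/tmp/host.sock); key generation has just begun. A sketch of the launch-and-wait pattern, with rpc.py's timeout option used as a stand-in for the suite's waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
  hostpid=$!
  # Block until each RPC server accepts requests on its UNIX socket.
  ./scripts/rpc.py -s /var/tmp/spdk.sock -t 30 rpc_get_methods > /dev/null
  ./scripts/rpc.py -s /var/tmp/host.sock -t 30 rpc_get_methods > /dev/null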
00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.MFd 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.MFd 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.MFd 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=21003252619f0b2619ee5567ceab93143c4564dfb7f21c76b66bb96cb8141cab 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.wlI 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 21003252619f0b2619ee5567ceab93143c4564dfb7f21c76b66bb96cb8141cab 3 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 21003252619f0b2619ee5567ceab93143c4564dfb7f21c76b66bb96cb8141cab 3 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=21003252619f0b2619ee5567ceab93143c4564dfb7f21c76b66bb96cb8141cab 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.wlI 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.wlI 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.wlI 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
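Each secret is stored in a private temp file whose name encodes its digest (spdk.key-null.*, spdk.key-sha512.*, ...), because the keyring RPCs used later take file paths rather than raw key strings. Roughly, per key:

  file=$(mktemp -t "spdk.key-$digest.XXX")  # e.g. /tmp/spdk.key-null.MFd
  printf '%s\n' "$formatted_key" > "$file"  # the DHHC-1:..: container string
  chmod 0600 "$file"                        # the trace locks every key file down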
00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=44a083409485fe98877513af2947d757 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.i8I 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 44a083409485fe98877513af2947d757 1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 44a083409485fe98877513af2947d757 1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=44a083409485fe98877513af2947d757 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.i8I 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.i8I 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.i8I 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4471a542d6006f457cd879c75f030bfe0a5c4254c9b198d8 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.sPb 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4471a542d6006f457cd879c75f030bfe0a5c4254c9b198d8 2 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4471a542d6006f457cd879c75f030bfe0a5c4254c9b198d8 2 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:33.341 11:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4471a542d6006f457cd879c75f030bfe0a5c4254c9b198d8 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:33.341 11:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.sPb 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.sPb 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.sPb 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=712e83d1c57c66a2fa325c9ef4eb66122c7ed538f8aba7d8 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Nkg 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 712e83d1c57c66a2fa325c9ef4eb66122c7ed538f8aba7d8 2 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 712e83d1c57c66a2fa325c9ef4eb66122c7ed538f8aba7d8 2 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=712e83d1c57c66a2fa325c9ef4eb66122c7ed538f8aba7d8 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Nkg 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Nkg 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Nkg 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c467d0e5b12cc57021021a5408039a41 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.rd4 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c467d0e5b12cc57021021a5408039a41 1 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c467d0e5b12cc57021021a5408039a41 1 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c467d0e5b12cc57021021a5408039a41 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.rd4 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.rd4 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.rd4 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=a4ddbbf333656366976c4d43e9a2ba101d920ecabada516aa43a5a7232015d81 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.jhn 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key a4ddbbf333656366976c4d43e9a2ba101d920ecabada516aa43a5a7232015d81 3 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 a4ddbbf333656366976c4d43e9a2ba101d920ecabada516aa43a5a7232015d81 3 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=a4ddbbf333656366976c4d43e9a2ba101d920ecabada516aa43a5a7232015d81 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.jhn 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.jhn 00:16:33.604 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.jhn 00:16:33.605 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:33.605 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 991946 00:16:33.605 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 991946 ']' 00:16:33.605 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.605 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.605 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.605 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.605 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 992293 /var/tmp/host.sock 00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 992293 ']' 00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:33.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
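All four key slots are now populated (ckeys[3] is deliberately left empty to exercise one-way auth later). gen_dhchap_key draws len/2 random bytes from /dev/urandom as a hex string, and format_dhchap_key wraps that ASCII string in the DH-HMAC-CHAP secret container DHHC-1:<hash id>:<base64(secret || CRC-32)>: where the hash id is 00/01/02/03 for null/sha256/sha384/sha512. A self-contained sketch of the encoder; the argument plumbing is mine, and the little-endian CRC-32 tail is the standard layout, consistent with the secrets visible in the nvme connect lines further down:

  key=$(xxd -p -c0 -l 24 /dev/urandom)  # 48 hex chars; this ASCII string is the secret
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(k+crc).decode()))' "$key" 0

Base64-decoding the DHHC-1:00: secret passed to nvme connect below yields exactly the 3da5c73b... hex string generated above plus its four CRC bytes.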
00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:33.866 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MFd 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.MFd 00:16:34.128 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.MFd 00:16:34.392 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.wlI ]] 00:16:34.392 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wlI 00:16:34.392 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.392 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.392 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.392 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wlI 00:16:34.392 11:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wlI 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.i8I 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.653 11:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.i8I 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.i8I 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.sPb ]] 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPb 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPb 00:16:34.653 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPb 00:16:34.914 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:34.914 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Nkg 00:16:34.914 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.914 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.914 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.914 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Nkg 00:16:34.914 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Nkg 00:16:35.175 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.rd4 ]] 00:16:35.175 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rd4 00:16:35.175 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.175 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.175 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.175 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rd4 00:16:35.175 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rd4 00:16:35.437 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:35.437 11:50:19 
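Every key file is registered twice under the same name, once with the target over /var/tmp/spdk.sock (rpc_cmd) and once with the host app over /var/tmp/host.sock (hostrpc), so both ends of the handshake can resolve key0..key3 and ckey0..ckey2 (key3, which has no companion ckey, follows). The pattern for one pair, taken from the trace:

  ./scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.i8I
  ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.i8I
  ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPb
  ./scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPb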
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jhn 00:16:35.437 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.437 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.437 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.437 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.jhn 00:16:35.437 11:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.jhn 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.698 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.698 
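The test loop runs digests x dhgroups x keyids; each connect_authenticate round pins the host's negotiable auth parameters, authorizes the host NQN on the subsystem with the key pair under test, and then attaches a controller through the host app, which is where the DH-HMAC-CHAP transaction actually executes. For the sha256/null/key0 round just entered:

  # Host side: restrict negotiation to the digest and dhgroup under test.
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  # Target side: require key0 from this host, ckey0 for the controller's reply.
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach; success means the handshake completed end to end.
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0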
11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.960 00:16:35.960 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.960 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.960 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.222 { 00:16:36.222 "cntlid": 1, 00:16:36.222 "qid": 0, 00:16:36.222 "state": "enabled", 00:16:36.222 "thread": "nvmf_tgt_poll_group_000", 00:16:36.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.222 "listen_address": { 00:16:36.222 "trtype": "TCP", 00:16:36.222 "adrfam": "IPv4", 00:16:36.222 "traddr": "10.0.0.2", 00:16:36.222 "trsvcid": "4420" 00:16:36.222 }, 00:16:36.222 "peer_address": { 00:16:36.222 "trtype": "TCP", 00:16:36.222 "adrfam": "IPv4", 00:16:36.222 "traddr": "10.0.0.1", 00:16:36.222 "trsvcid": "48282" 00:16:36.222 }, 00:16:36.222 "auth": { 00:16:36.222 "state": "completed", 00:16:36.222 "digest": "sha256", 00:16:36.222 "dhgroup": "null" 00:16:36.222 } 00:16:36.222 } 00:16:36.222 ]' 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.222 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.485 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.485 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.485 11:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.485 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:16:36.485 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:16:37.057 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.057 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.057 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.057 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.057 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.057 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.057 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.057 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.318 11:50:21 
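Besides the SPDK-to-SPDK attach, every round repeats the handshake with the Linux kernel initiator: nvme connect is handed the raw DHHC-1 strings (host key as --dhchap-secret, controller key as --dhchap-ctrl-secret for bidirectional auth), then the controller is disconnected and the host entry removed so the next keyid starts clean. As in the key0 round above, with the full secrets elided:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"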
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.318 11:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.618 00:16:37.618 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.618 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.618 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.917 { 00:16:37.917 "cntlid": 3, 00:16:37.917 "qid": 0, 00:16:37.917 "state": "enabled", 00:16:37.917 "thread": "nvmf_tgt_poll_group_000", 00:16:37.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:37.917 "listen_address": { 00:16:37.917 "trtype": "TCP", 00:16:37.917 "adrfam": "IPv4", 00:16:37.917 "traddr": "10.0.0.2", 00:16:37.917 "trsvcid": "4420" 00:16:37.917 }, 00:16:37.917 "peer_address": { 00:16:37.917 "trtype": "TCP", 00:16:37.917 "adrfam": "IPv4", 00:16:37.917 "traddr": "10.0.0.1", 00:16:37.917 "trsvcid": "48310" 00:16:37.917 }, 00:16:37.917 "auth": { 00:16:37.917 "state": "completed", 00:16:37.917 "digest": "sha256", 00:16:37.917 "dhgroup": "null" 00:16:37.917 } 00:16:37.917 } 00:16:37.917 ]' 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.917 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.199 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:16:38.199 11:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.840 11:50:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.840 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.101 00:16:39.101 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.101 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.101 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.362 { 00:16:39.362 "cntlid": 5, 00:16:39.362 "qid": 0, 00:16:39.362 "state": "enabled", 00:16:39.362 "thread": "nvmf_tgt_poll_group_000", 00:16:39.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.362 "listen_address": { 00:16:39.362 "trtype": "TCP", 00:16:39.362 "adrfam": "IPv4", 00:16:39.362 "traddr": "10.0.0.2", 00:16:39.362 "trsvcid": "4420" 00:16:39.362 }, 00:16:39.362 "peer_address": { 00:16:39.362 "trtype": "TCP", 00:16:39.362 "adrfam": "IPv4", 00:16:39.362 "traddr": "10.0.0.1", 00:16:39.362 "trsvcid": "48320" 00:16:39.362 }, 00:16:39.362 "auth": { 00:16:39.362 "state": "completed", 00:16:39.362 "digest": "sha256", 00:16:39.362 "dhgroup": "null" 00:16:39.362 } 00:16:39.362 } 00:16:39.362 ]' 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.362 11:50:23 
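Each attach is verified by pulling the subsystem's qpairs from the target and asserting on the auth block of the JSON dump shown above: digest and dhgroup must be the pair under test and state must read completed. The checks reduce to:

  qpairs=$(./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]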
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.362 11:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.622 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:16:39.623 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:16:40.193 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.193 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.193 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.193 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.193 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.193 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.193 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.193 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.453 11:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.713 00:16:40.713 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.713 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.713 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.973 { 00:16:40.973 "cntlid": 7, 00:16:40.973 "qid": 0, 00:16:40.973 "state": "enabled", 00:16:40.973 "thread": "nvmf_tgt_poll_group_000", 00:16:40.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.973 "listen_address": { 00:16:40.973 "trtype": "TCP", 00:16:40.973 "adrfam": "IPv4", 00:16:40.973 "traddr": "10.0.0.2", 00:16:40.973 "trsvcid": "4420" 00:16:40.973 }, 00:16:40.973 "peer_address": { 00:16:40.973 "trtype": "TCP", 00:16:40.973 "adrfam": "IPv4", 00:16:40.973 "traddr": "10.0.0.1", 00:16:40.973 "trsvcid": "54160" 00:16:40.973 }, 00:16:40.973 "auth": { 00:16:40.973 "state": "completed", 00:16:40.973 "digest": "sha256", 00:16:40.973 "dhgroup": "null" 00:16:40.973 } 00:16:40.973 } 00:16:40.973 ]' 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.973 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.234 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:16:41.234 11:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.805 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.065 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.327 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.327 { 00:16:42.327 "cntlid": 9, 00:16:42.327 "qid": 0, 00:16:42.327 "state": "enabled", 00:16:42.327 "thread": "nvmf_tgt_poll_group_000", 00:16:42.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.327 "listen_address": { 00:16:42.327 "trtype": "TCP", 00:16:42.327 "adrfam": "IPv4", 00:16:42.327 "traddr": "10.0.0.2", 00:16:42.327 "trsvcid": "4420" 00:16:42.327 }, 00:16:42.327 "peer_address": { 00:16:42.327 "trtype": "TCP", 00:16:42.327 "adrfam": "IPv4", 00:16:42.327 "traddr": "10.0.0.1", 00:16:42.327 "trsvcid": "54190" 00:16:42.327 }, 00:16:42.327 "auth": { 00:16:42.327 "state": "completed", 00:16:42.327 "digest": "sha256", 00:16:42.327 "dhgroup": "ffdhe2048" 00:16:42.327 } 00:16:42.327 } 00:16:42.327 ]' 00:16:42.327 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.588 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.588 11:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.588 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:42.588 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.588 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.588 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.588 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.848 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:16:42.848 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:16:43.419 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.420 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.420 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.420 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.420 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.420 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.420 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.420 11:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.420 11:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.420 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.680 00:16:43.680 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.680 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.680 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.941 { 00:16:43.941 "cntlid": 11, 00:16:43.941 "qid": 0, 00:16:43.941 "state": "enabled", 00:16:43.941 "thread": "nvmf_tgt_poll_group_000", 00:16:43.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.941 "listen_address": { 00:16:43.941 "trtype": "TCP", 00:16:43.941 "adrfam": "IPv4", 00:16:43.941 "traddr": "10.0.0.2", 00:16:43.941 "trsvcid": "4420" 00:16:43.941 }, 00:16:43.941 "peer_address": { 00:16:43.941 "trtype": "TCP", 00:16:43.941 "adrfam": "IPv4", 00:16:43.941 "traddr": "10.0.0.1", 00:16:43.941 "trsvcid": "54230" 00:16:43.941 }, 00:16:43.941 "auth": { 00:16:43.941 "state": "completed", 00:16:43.941 "digest": "sha256", 00:16:43.941 "dhgroup": "ffdhe2048" 00:16:43.941 } 00:16:43.941 } 00:16:43.941 ]' 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.941 11:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.941 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.202 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:16:44.202 11:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:16:44.773 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.773 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.773 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.773 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.773 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.773 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.773 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:44.773 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:45.034 11:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.034 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.294 00:16:45.294 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.294 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.294 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.556 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.556 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.556 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.556 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.556 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.556 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.556 { 00:16:45.556 "cntlid": 13, 00:16:45.556 "qid": 0, 00:16:45.556 "state": "enabled", 00:16:45.556 "thread": "nvmf_tgt_poll_group_000", 00:16:45.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.556 "listen_address": { 00:16:45.556 "trtype": "TCP", 00:16:45.556 "adrfam": "IPv4", 00:16:45.556 "traddr": "10.0.0.2", 00:16:45.556 "trsvcid": "4420" 00:16:45.556 }, 00:16:45.556 "peer_address": { 00:16:45.556 "trtype": "TCP", 00:16:45.556 "adrfam": "IPv4", 00:16:45.556 "traddr": "10.0.0.1", 00:16:45.556 "trsvcid": "54272" 00:16:45.556 }, 00:16:45.556 "auth": { 00:16:45.556 "state": "completed", 00:16:45.556 "digest": 
"sha256", 00:16:45.556 "dhgroup": "ffdhe2048" 00:16:45.556 } 00:16:45.556 } 00:16:45.556 ]' 00:16:45.556 11:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.556 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.556 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.556 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.556 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.556 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.556 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.556 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.816 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:16:45.816 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:16:46.388 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.388 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.389 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.389 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.389 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.389 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.389 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.389 11:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.649 11:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.649 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.910 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.910 { 00:16:46.910 "cntlid": 15, 00:16:46.910 "qid": 0, 00:16:46.910 "state": "enabled", 00:16:46.910 "thread": "nvmf_tgt_poll_group_000", 00:16:46.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.910 "listen_address": { 00:16:46.910 "trtype": "TCP", 00:16:46.910 "adrfam": "IPv4", 00:16:46.910 "traddr": "10.0.0.2", 00:16:46.910 "trsvcid": "4420" 00:16:46.910 }, 00:16:46.910 "peer_address": { 00:16:46.910 "trtype": "TCP", 00:16:46.910 "adrfam": "IPv4", 00:16:46.910 "traddr": "10.0.0.1", 00:16:46.910 
"trsvcid": "54294" 00:16:46.910 }, 00:16:46.910 "auth": { 00:16:46.910 "state": "completed", 00:16:46.910 "digest": "sha256", 00:16:46.910 "dhgroup": "ffdhe2048" 00:16:46.910 } 00:16:46.910 } 00:16:46.910 ]' 00:16:46.910 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.170 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.170 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.170 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.170 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.170 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.170 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.170 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.430 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:16:47.430 11:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.000 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:48.261 11:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.261 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.261 11:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.521 { 00:16:48.521 "cntlid": 17, 00:16:48.521 "qid": 0, 00:16:48.521 "state": "enabled", 00:16:48.521 "thread": "nvmf_tgt_poll_group_000", 00:16:48.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.521 "listen_address": { 00:16:48.521 "trtype": "TCP", 00:16:48.521 "adrfam": "IPv4", 
00:16:48.521 "traddr": "10.0.0.2", 00:16:48.521 "trsvcid": "4420" 00:16:48.521 }, 00:16:48.521 "peer_address": { 00:16:48.521 "trtype": "TCP", 00:16:48.521 "adrfam": "IPv4", 00:16:48.521 "traddr": "10.0.0.1", 00:16:48.521 "trsvcid": "54328" 00:16:48.521 }, 00:16:48.521 "auth": { 00:16:48.521 "state": "completed", 00:16:48.521 "digest": "sha256", 00:16:48.521 "dhgroup": "ffdhe3072" 00:16:48.521 } 00:16:48.521 } 00:16:48.521 ]' 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.521 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.782 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.782 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.782 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.782 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.782 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.782 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:16:48.782 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:16:49.352 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.352 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:49.352 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.352 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.612 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.612 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.612 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.612 11:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.612 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.613 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.613 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.873 00:16:49.873 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.873 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.873 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.133 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.133 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.133 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.133 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.133 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.133 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.133 { 
00:16:50.133 "cntlid": 19, 00:16:50.133 "qid": 0, 00:16:50.133 "state": "enabled", 00:16:50.133 "thread": "nvmf_tgt_poll_group_000", 00:16:50.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.133 "listen_address": { 00:16:50.133 "trtype": "TCP", 00:16:50.133 "adrfam": "IPv4", 00:16:50.133 "traddr": "10.0.0.2", 00:16:50.133 "trsvcid": "4420" 00:16:50.133 }, 00:16:50.133 "peer_address": { 00:16:50.133 "trtype": "TCP", 00:16:50.133 "adrfam": "IPv4", 00:16:50.133 "traddr": "10.0.0.1", 00:16:50.133 "trsvcid": "54352" 00:16:50.133 }, 00:16:50.134 "auth": { 00:16:50.134 "state": "completed", 00:16:50.134 "digest": "sha256", 00:16:50.134 "dhgroup": "ffdhe3072" 00:16:50.134 } 00:16:50.134 } 00:16:50.134 ]' 00:16:50.134 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.134 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.134 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.134 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.134 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.134 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.134 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.134 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.394 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:16:50.394 11:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:16:50.965 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.965 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.965 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.965 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.965 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.965 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.965 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.965 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.225 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.493 00:16:51.493 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.493 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.493 11:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.755 11:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.755 { 00:16:51.755 "cntlid": 21, 00:16:51.755 "qid": 0, 00:16:51.755 "state": "enabled", 00:16:51.755 "thread": "nvmf_tgt_poll_group_000", 00:16:51.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.755 "listen_address": { 00:16:51.755 "trtype": "TCP", 00:16:51.755 "adrfam": "IPv4", 00:16:51.755 "traddr": "10.0.0.2", 00:16:51.755 "trsvcid": "4420" 00:16:51.755 }, 00:16:51.755 "peer_address": { 00:16:51.755 "trtype": "TCP", 00:16:51.755 "adrfam": "IPv4", 00:16:51.755 "traddr": "10.0.0.1", 00:16:51.755 "trsvcid": "41396" 00:16:51.755 }, 00:16:51.755 "auth": { 00:16:51.755 "state": "completed", 00:16:51.755 "digest": "sha256", 00:16:51.755 "dhgroup": "ffdhe3072" 00:16:51.755 } 00:16:51.755 } 00:16:51.755 ]' 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.755 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.015 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:16:52.015 11:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:16:52.585 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.585 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.585 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.585 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.585 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:52.585 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.585 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.585 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.845 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.846 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.106 00:16:53.106 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.106 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.106 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.106 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.106 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.106 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.106 11:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.106 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.366 { 00:16:53.366 "cntlid": 23, 00:16:53.366 "qid": 0, 00:16:53.366 "state": "enabled", 00:16:53.366 "thread": "nvmf_tgt_poll_group_000", 00:16:53.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.366 "listen_address": { 00:16:53.366 "trtype": "TCP", 00:16:53.366 "adrfam": "IPv4", 00:16:53.366 "traddr": "10.0.0.2", 00:16:53.366 "trsvcid": "4420" 00:16:53.366 }, 00:16:53.366 "peer_address": { 00:16:53.366 "trtype": "TCP", 00:16:53.366 "adrfam": "IPv4", 00:16:53.366 "traddr": "10.0.0.1", 00:16:53.366 "trsvcid": "41428" 00:16:53.366 }, 00:16:53.366 "auth": { 00:16:53.366 "state": "completed", 00:16:53.366 "digest": "sha256", 00:16:53.366 "dhgroup": "ffdhe3072" 00:16:53.366 } 00:16:53.366 } 00:16:53.366 ]' 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.366 11:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.626 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:16:53.626 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.194 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.455 11:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.715 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.715 { 00:16:54.715 "cntlid": 25, 00:16:54.715 "qid": 0, 00:16:54.715 "state": "enabled", 00:16:54.715 "thread": "nvmf_tgt_poll_group_000", 00:16:54.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.715 "listen_address": { 00:16:54.715 "trtype": "TCP", 00:16:54.715 "adrfam": "IPv4", 00:16:54.715 "traddr": "10.0.0.2", 00:16:54.715 "trsvcid": "4420" 00:16:54.715 }, 00:16:54.715 "peer_address": { 00:16:54.715 "trtype": "TCP", 00:16:54.715 "adrfam": "IPv4", 00:16:54.715 "traddr": "10.0.0.1", 00:16:54.715 "trsvcid": "41456" 00:16:54.715 }, 00:16:54.715 "auth": { 00:16:54.715 "state": "completed", 00:16:54.715 "digest": "sha256", 00:16:54.715 "dhgroup": "ffdhe4096" 00:16:54.715 } 00:16:54.715 } 00:16:54.715 ]' 00:16:54.715 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.977 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.977 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.977 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.977 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.977 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.977 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.977 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.238 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:16:55.238 11:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.809 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.070 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.070 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.070 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.070 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.070 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.330 { 00:16:56.330 "cntlid": 27, 00:16:56.330 "qid": 0, 00:16:56.330 "state": "enabled", 00:16:56.330 "thread": "nvmf_tgt_poll_group_000", 00:16:56.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.330 "listen_address": { 00:16:56.330 "trtype": "TCP", 00:16:56.330 "adrfam": "IPv4", 00:16:56.330 "traddr": "10.0.0.2", 00:16:56.330 "trsvcid": "4420" 00:16:56.330 }, 00:16:56.330 "peer_address": { 00:16:56.330 "trtype": "TCP", 00:16:56.330 "adrfam": "IPv4", 00:16:56.330 "traddr": "10.0.0.1", 00:16:56.330 "trsvcid": "41482" 00:16:56.330 }, 00:16:56.330 "auth": { 00:16:56.330 "state": "completed", 00:16:56.330 "digest": "sha256", 00:16:56.330 "dhgroup": "ffdhe4096" 00:16:56.330 } 00:16:56.330 } 00:16:56.330 ]' 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.330 11:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.591 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.591 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.591 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.591 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.591 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.853 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:16:56.853 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:16:57.424 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:57.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.424 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.424 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.424 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.424 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.424 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.424 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.424 11:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.424 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.684 00:16:57.684 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:57.684 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.684 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.943 { 00:16:57.943 "cntlid": 29, 00:16:57.943 "qid": 0, 00:16:57.943 "state": "enabled", 00:16:57.943 "thread": "nvmf_tgt_poll_group_000", 00:16:57.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.943 "listen_address": { 00:16:57.943 "trtype": "TCP", 00:16:57.943 "adrfam": "IPv4", 00:16:57.943 "traddr": "10.0.0.2", 00:16:57.943 "trsvcid": "4420" 00:16:57.943 }, 00:16:57.943 "peer_address": { 00:16:57.943 "trtype": "TCP", 00:16:57.943 "adrfam": "IPv4", 00:16:57.943 "traddr": "10.0.0.1", 00:16:57.943 "trsvcid": "41512" 00:16:57.943 }, 00:16:57.943 "auth": { 00:16:57.943 "state": "completed", 00:16:57.943 "digest": "sha256", 00:16:57.943 "dhgroup": "ffdhe4096" 00:16:57.943 } 00:16:57.943 } 00:16:57.943 ]' 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.943 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.203 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.203 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.203 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.203 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.203 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.203 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:16:58.203 11:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: 
--dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:16:58.774 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.034 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.035 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.295 00:16:59.295 11:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.295 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.295 11:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.554 { 00:16:59.554 "cntlid": 31, 00:16:59.554 "qid": 0, 00:16:59.554 "state": "enabled", 00:16:59.554 "thread": "nvmf_tgt_poll_group_000", 00:16:59.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.554 "listen_address": { 00:16:59.554 "trtype": "TCP", 00:16:59.554 "adrfam": "IPv4", 00:16:59.554 "traddr": "10.0.0.2", 00:16:59.554 "trsvcid": "4420" 00:16:59.554 }, 00:16:59.554 "peer_address": { 00:16:59.554 "trtype": "TCP", 00:16:59.554 "adrfam": "IPv4", 00:16:59.554 "traddr": "10.0.0.1", 00:16:59.554 "trsvcid": "41530" 00:16:59.554 }, 00:16:59.554 "auth": { 00:16:59.554 "state": "completed", 00:16:59.554 "digest": "sha256", 00:16:59.554 "dhgroup": "ffdhe4096" 00:16:59.554 } 00:16:59.554 } 00:16:59.554 ]' 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.554 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.813 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.813 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.813 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.813 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:16:59.813 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.418 11:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.678 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.938 00:17:00.938 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.938 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.938 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.198 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.198 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.198 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.198 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.198 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.198 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.198 { 00:17:01.198 "cntlid": 33, 00:17:01.198 "qid": 0, 00:17:01.198 "state": "enabled", 00:17:01.198 "thread": "nvmf_tgt_poll_group_000", 00:17:01.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.198 "listen_address": { 00:17:01.198 "trtype": "TCP", 00:17:01.198 "adrfam": "IPv4", 00:17:01.198 "traddr": "10.0.0.2", 00:17:01.198 "trsvcid": "4420" 00:17:01.198 }, 00:17:01.198 "peer_address": { 00:17:01.198 "trtype": "TCP", 00:17:01.198 "adrfam": "IPv4", 00:17:01.198 "traddr": "10.0.0.1", 00:17:01.198 "trsvcid": "46076" 00:17:01.198 }, 00:17:01.198 "auth": { 00:17:01.198 "state": "completed", 00:17:01.198 "digest": "sha256", 00:17:01.198 "dhgroup": "ffdhe6144" 00:17:01.198 } 00:17:01.198 } 00:17:01.198 ]' 00:17:01.199 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.199 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.199 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.199 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.199 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.457 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.457 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.457 11:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.457 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret 
DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:01.457 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:02.025 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.025 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.025 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.025 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.025 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.025 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.025 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.025 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.285 11:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.545 00:17:02.545 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.545 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.545 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.805 { 00:17:02.805 "cntlid": 35, 00:17:02.805 "qid": 0, 00:17:02.805 "state": "enabled", 00:17:02.805 "thread": "nvmf_tgt_poll_group_000", 00:17:02.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.805 "listen_address": { 00:17:02.805 "trtype": "TCP", 00:17:02.805 "adrfam": "IPv4", 00:17:02.805 "traddr": "10.0.0.2", 00:17:02.805 "trsvcid": "4420" 00:17:02.805 }, 00:17:02.805 "peer_address": { 00:17:02.805 "trtype": "TCP", 00:17:02.805 "adrfam": "IPv4", 00:17:02.805 "traddr": "10.0.0.1", 00:17:02.805 "trsvcid": "46102" 00:17:02.805 }, 00:17:02.805 "auth": { 00:17:02.805 "state": "completed", 00:17:02.805 "digest": "sha256", 00:17:02.805 "dhgroup": "ffdhe6144" 00:17:02.805 } 00:17:02.805 } 00:17:02.805 ]' 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.805 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.066 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.066 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.066 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.066 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.066 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.326 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:03.326 11:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.896 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:04.465
00:17:04.465 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:04.465 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:04.465 11:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:04.465 {
00:17:04.465 "cntlid": 37,
00:17:04.465 "qid": 0,
00:17:04.465 "state": "enabled",
00:17:04.465 "thread": "nvmf_tgt_poll_group_000",
00:17:04.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:04.465 "listen_address": {
00:17:04.465 "trtype": "TCP",
00:17:04.465 "adrfam": "IPv4",
00:17:04.465 "traddr": "10.0.0.2",
00:17:04.465 "trsvcid": "4420"
00:17:04.465 },
00:17:04.465 "peer_address": {
00:17:04.465 "trtype": "TCP",
00:17:04.465 "adrfam": "IPv4",
00:17:04.465 "traddr": "10.0.0.1",
00:17:04.465 "trsvcid": "46128"
00:17:04.465 },
00:17:04.465 "auth": {
00:17:04.465 "state": "completed",
00:17:04.465 "digest": "sha256",
00:17:04.465 "dhgroup": "ffdhe6144"
00:17:04.465 }
00:17:04.465 }
00:17:04.465 ]'
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:04.465 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:04.725 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:04.725 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:04.725 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:04.725 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:04.725 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:04.986 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo:
00:17:04.986 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo:
00:17:05.556 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:05.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:05.556 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:05.556 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.556 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.556 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.556 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:05.556 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:05.556 11:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:05.557 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:06.127
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:06.127 {
00:17:06.127 "cntlid": 39,
00:17:06.127 "qid": 0,
00:17:06.127 "state": "enabled",
00:17:06.127 "thread": "nvmf_tgt_poll_group_000",
00:17:06.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:06.127 "listen_address": {
00:17:06.127 "trtype": "TCP",
00:17:06.127 "adrfam": "IPv4",
00:17:06.127 "traddr": "10.0.0.2",
00:17:06.127 "trsvcid": "4420"
00:17:06.127 },
00:17:06.127 "peer_address": {
00:17:06.127 "trtype": "TCP",
00:17:06.127 "adrfam": "IPv4",
00:17:06.127 "traddr": "10.0.0.1",
00:17:06.127 "trsvcid": "46146"
00:17:06.127 },
00:17:06.127 "auth": {
00:17:06.127 "state": "completed",
00:17:06.127 "digest": "sha256",
00:17:06.127 "dhgroup": "ffdhe6144"
00:17:06.127 }
00:17:06.127 }
00:17:06.127 ]'
00:17:06.127 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:06.387 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:06.387 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:06.387 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:06.387 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:06.387 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:06.387 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:06.387 11:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:06.647 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=:
00:17:06.648 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=:
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:07.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
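The run above is one complete pass of the script's per-key loop: pick a digest/dhgroup pair, register the host on the subsystem with the key under test, attach a controller through the host RPC socket, check the resulting qpair, then tear everything down before the next keyid. A condensed sketch of that cycle, using the same rpc.py subcommands, sockets and NQNs that appear in this log (the $rpc, $hostnqn and $subnqn variables are shorthand introduced here for readability only; key0/ckey0 assume the keys were already loaded by the earlier setup):

    # One connect/authenticate cycle as driven by target/auth.sh (sketch).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Target side: allow this host to authenticate with key0/ckey0.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: pin the negotiation to one digest/dhgroup pair, then attach.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the attach and the negotiated auth parameters, then tear down.
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $rpc nvmf_subsystem_get_qpairs "$subnqn"                                  # auth.state: completed
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"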
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.219 11:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:07.790
00:17:07.790 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:07.790 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:07.790 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:08.050 {
00:17:08.050 "cntlid": 41,
00:17:08.050 "qid": 0,
00:17:08.050 "state": "enabled",
00:17:08.050 "thread": "nvmf_tgt_poll_group_000",
00:17:08.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:08.050 "listen_address": {
00:17:08.050 "trtype": "TCP",
00:17:08.050 "adrfam": "IPv4",
00:17:08.050 "traddr": "10.0.0.2",
00:17:08.050 "trsvcid": "4420"
00:17:08.050 },
00:17:08.050 "peer_address": {
00:17:08.050 "trtype": "TCP",
00:17:08.050 "adrfam": "IPv4",
00:17:08.050 "traddr": "10.0.0.1",
00:17:08.050 "trsvcid": "46172"
00:17:08.050 },
00:17:08.050 "auth": {
00:17:08.050 "state": "completed",
00:17:08.050 "digest": "sha256",
00:17:08.050 "dhgroup": "ffdhe8192"
00:17:08.050 }
00:17:08.050 }
00:17:08.050 ]'
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:08.050 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:08.311 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=:
00:17:08.311 11:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=:
00:17:08.883 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:08.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:08.883 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:08.883 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:08.883 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.883 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:08.883 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:08.883 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:08.883 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:09.143 11:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:09.715
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:09.715 {
00:17:09.715 "cntlid": 43,
00:17:09.715 "qid": 0,
00:17:09.715 "state": "enabled",
00:17:09.715 "thread": "nvmf_tgt_poll_group_000",
00:17:09.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:09.715 "listen_address": {
00:17:09.715 "trtype": "TCP",
00:17:09.715 "adrfam": "IPv4",
00:17:09.715 "traddr": "10.0.0.2",
00:17:09.715 "trsvcid": "4420"
00:17:09.715 },
00:17:09.715 "peer_address": {
00:17:09.715 "trtype": "TCP",
00:17:09.715 "adrfam": "IPv4",
00:17:09.715 "traddr": "10.0.0.1",
00:17:09.715 "trsvcid": "46196"
00:17:09.715 },
00:17:09.715 "auth": {
00:17:09.715 "state": "completed",
00:17:09.715 "digest": "sha256",
00:17:09.715 "dhgroup": "ffdhe8192"
00:17:09.715 }
00:17:09.715 }
00:17:09.715 ]'
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:09.715 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:09.976 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:09.976 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:09.976 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:09.976 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:09.976 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:09.976 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==:
00:17:09.976 11:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==:
00:17:10.546 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:10.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:10.546 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:10.546 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.546 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.811 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.811 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:10.811 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:10.811 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:10.811 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:17:10.811 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:10.811 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:10.811 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:10.812 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:11.384
00:17:11.384 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:11.384 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:11.384 11:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.384 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.384 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:11.384 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:11.384 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:11.648 {
00:17:11.648 "cntlid": 45,
00:17:11.648 "qid": 0,
00:17:11.648 "state": "enabled",
00:17:11.648 "thread": "nvmf_tgt_poll_group_000",
00:17:11.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:11.648 "listen_address": {
00:17:11.648 "trtype": "TCP",
00:17:11.648 "adrfam": "IPv4",
00:17:11.648 "traddr": "10.0.0.2",
00:17:11.648 "trsvcid": "4420"
00:17:11.648 },
00:17:11.648 "peer_address": {
00:17:11.648 "trtype": "TCP",
00:17:11.648 "adrfam": "IPv4",
00:17:11.648 "traddr": "10.0.0.1",
00:17:11.648 "trsvcid": "52532"
00:17:11.648 },
00:17:11.648 "auth": {
00:17:11.648 "state": "completed",
00:17:11.648 "digest": "sha256",
00:17:11.648 "dhgroup": "ffdhe8192"
00:17:11.648 }
00:17:11.648 }
00:17:11.648 ]'
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:11.648 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:11.908 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo:
00:17:11.908 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo:
00:17:12.479 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:12.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:12.479 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:12.479 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.479 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.479 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.479 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:12.479 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:12.479 11:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:12.740 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:13.000
00:17:13.000 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:13.000 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:13.000 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:13.260 {
00:17:13.260 "cntlid": 47,
00:17:13.260 "qid": 0,
00:17:13.260 "state": "enabled",
00:17:13.260 "thread": "nvmf_tgt_poll_group_000",
00:17:13.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:13.260 "listen_address": {
00:17:13.260 "trtype": "TCP",
00:17:13.260 "adrfam": "IPv4",
00:17:13.260 "traddr": "10.0.0.2",
00:17:13.260 "trsvcid": "4420"
00:17:13.260 },
00:17:13.260 "peer_address": {
00:17:13.260 "trtype": "TCP",
00:17:13.260 "adrfam": "IPv4",
00:17:13.260 "traddr": "10.0.0.1",
00:17:13.260 "trsvcid": "52572"
00:17:13.260 },
00:17:13.260 "auth": {
00:17:13.260 "state": "completed",
00:17:13.260 "digest": "sha256",
00:17:13.260 "dhgroup": "ffdhe8192"
00:17:13.260 }
00:17:13.260 }
00:17:13.260 ]'
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:17:13.260 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:13.521 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:13.521 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:13.521 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:13.521 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:13.521 11:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:13.521 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=:
00:17:13.521 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=:
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:14.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:14.462 11:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:14.723
00:17:14.723 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:14.723 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:14.723 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:14.723 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:14.723 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:14.723 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.723 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:14.983 {
00:17:14.983 "cntlid": 49,
00:17:14.983 "qid": 0,
00:17:14.983 "state": "enabled",
00:17:14.983 "thread": "nvmf_tgt_poll_group_000",
00:17:14.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:14.983 "listen_address": {
00:17:14.983 "trtype": "TCP",
00:17:14.983 "adrfam": "IPv4",
00:17:14.983 "traddr": "10.0.0.2",
00:17:14.983 "trsvcid": "4420"
00:17:14.983 },
00:17:14.983 "peer_address": {
00:17:14.983 "trtype": "TCP",
00:17:14.983 "adrfam": "IPv4",
00:17:14.983 "traddr": "10.0.0.1",
00:17:14.983 "trsvcid": "52604"
00:17:14.983 },
00:17:14.983 "auth": {
00:17:14.983 "state": "completed",
00:17:14.983 "digest": "sha384",
00:17:14.983 "dhgroup": "null"
00:17:14.983 }
00:17:14.983 }
00:17:14.983 ]'
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:14.983 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:15.244 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=:
00:17:15.244 11:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=:
00:17:15.814 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:15.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:15.814 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:15.814 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:15.814 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.814 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:15.814 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:15.814 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:15.814 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:16.074
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:16.074 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:16.334 {
00:17:16.334 "cntlid": 51,
00:17:16.334 "qid": 0,
00:17:16.334 "state": "enabled",
00:17:16.334 "thread": "nvmf_tgt_poll_group_000",
00:17:16.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:16.334 "listen_address": {
00:17:16.334 "trtype": "TCP",
00:17:16.334 "adrfam": "IPv4",
00:17:16.334 "traddr": "10.0.0.2",
00:17:16.334 "trsvcid": "4420"
00:17:16.334 },
00:17:16.334 "peer_address": {
00:17:16.334 "trtype": "TCP",
00:17:16.334 "adrfam": "IPv4",
00:17:16.334 "traddr": "10.0.0.1",
00:17:16.334 "trsvcid": "52636"
00:17:16.334 },
00:17:16.334 "auth": {
00:17:16.334 "state": "completed",
00:17:16.334 "digest": "sha384",
00:17:16.334 "dhgroup": "null"
00:17:16.334 }
00:17:16.334 }
00:17:16.334 ]'
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:16.334 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:16.595 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:16.595 11:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:16.595 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:16.595 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:16.595 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:16.595 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==:
00:17:16.595 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==:
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:17.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.535 11:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.536 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.536 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:17.536 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:17.536 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:17.795
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:17.795 {
00:17:17.795 "cntlid": 53,
00:17:17.795 "qid": 0,
00:17:17.795 "state": "enabled",
00:17:17.795 "thread": "nvmf_tgt_poll_group_000",
00:17:17.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:17.795 "listen_address": {
00:17:17.795 "trtype": "TCP",
00:17:17.795 "adrfam": "IPv4",
00:17:17.795 "traddr": "10.0.0.2",
00:17:17.795 "trsvcid": "4420"
00:17:17.795 },
00:17:17.795 "peer_address": {
00:17:17.795 "trtype": "TCP",
00:17:17.795 "adrfam": "IPv4",
00:17:17.795 "traddr": "10.0.0.1",
00:17:17.795 "trsvcid": "52654"
00:17:17.795 },
00:17:17.795 "auth": {
00:17:17.795 "state": "completed",
00:17:17.795 "digest": "sha384",
00:17:17.795 "dhgroup": "null"
00:17:17.795 }
00:17:17.795 }
00:17:17.795 ]'
00:17:17.795 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:18.055 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:18.055 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:18.055 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:18.055 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:18.055 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:18.055 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:18.055 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:18.315 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo:
00:17:18.315 11:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo:
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:18.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:18.885 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.146 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.147 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:19.147 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:19.147 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:19.147
00:17:19.147 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:19.147 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:19.147 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:19.407 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:19.407 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:19.407 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.407 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.407 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.407 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:19.407 {
00:17:19.407 "cntlid": 55,
00:17:19.407 "qid": 0,
00:17:19.407 "state": "enabled",
00:17:19.407 "thread": "nvmf_tgt_poll_group_000",
00:17:19.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:19.407 "listen_address": {
00:17:19.407 "trtype": "TCP",
00:17:19.407 "adrfam": "IPv4",
00:17:19.407 "traddr": "10.0.0.2",
00:17:19.407 "trsvcid": "4420"
00:17:19.407 },
00:17:19.407 "peer_address": {
00:17:19.407 "trtype": "TCP",
00:17:19.407 "adrfam": "IPv4",
00:17:19.407 "traddr": "10.0.0.1",
00:17:19.407 "trsvcid": "52692"
00:17:19.407 },
00:17:19.407 "auth": {
00:17:19.407 "state": "completed",
00:17:19.407 "digest": "sha384",
00:17:19.407 "dhgroup": "null"
00:17:19.407 }
00:17:19.407 }
00:17:19.407 ]'
00:17:19.407 11:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:19.407 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:19.407 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:19.667 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:19.667 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:19.667 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:19.667 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:19.667 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:19.667 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=:
00:17:19.667 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=:
00:17:20.236 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:20.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:20.236 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:20.236 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:20.236 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.497 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:20.497 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:20.497 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:20.497 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:20.497 11:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:20.497 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:20.758
00:17:20.758 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:20.758 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:20.758 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- #
xtrace_disable 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.018 { 00:17:21.018 "cntlid": 57, 00:17:21.018 "qid": 0, 00:17:21.018 "state": "enabled", 00:17:21.018 "thread": "nvmf_tgt_poll_group_000", 00:17:21.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.018 "listen_address": { 00:17:21.018 "trtype": "TCP", 00:17:21.018 "adrfam": "IPv4", 00:17:21.018 "traddr": "10.0.0.2", 00:17:21.018 "trsvcid": "4420" 00:17:21.018 }, 00:17:21.018 "peer_address": { 00:17:21.018 "trtype": "TCP", 00:17:21.018 "adrfam": "IPv4", 00:17:21.018 "traddr": "10.0.0.1", 00:17:21.018 "trsvcid": "47862" 00:17:21.018 }, 00:17:21.018 "auth": { 00:17:21.018 "state": "completed", 00:17:21.018 "digest": "sha384", 00:17:21.018 "dhgroup": "ffdhe2048" 00:17:21.018 } 00:17:21.018 } 00:17:21.018 ]' 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.018 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.019 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.280 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:21.280 11:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:21.850 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.850 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.850 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.850 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.850 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.850 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.850 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.850 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.111 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.371 00:17:22.371 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.371 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.371 11:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.632 { 00:17:22.632 "cntlid": 59, 00:17:22.632 "qid": 0, 00:17:22.632 "state": "enabled", 00:17:22.632 "thread": "nvmf_tgt_poll_group_000", 00:17:22.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.632 "listen_address": { 00:17:22.632 "trtype": "TCP", 00:17:22.632 "adrfam": "IPv4", 00:17:22.632 "traddr": "10.0.0.2", 00:17:22.632 "trsvcid": "4420" 00:17:22.632 }, 00:17:22.632 "peer_address": { 00:17:22.632 "trtype": "TCP", 00:17:22.632 "adrfam": "IPv4", 00:17:22.632 "traddr": "10.0.0.1", 00:17:22.632 "trsvcid": "47884" 00:17:22.632 }, 00:17:22.632 "auth": { 00:17:22.632 "state": "completed", 00:17:22.632 "digest": "sha384", 00:17:22.632 "dhgroup": "ffdhe2048" 00:17:22.632 } 00:17:22.632 } 00:17:22.632 ]' 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.632 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.894 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:22.894 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:23.465 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.465 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.465 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.465 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.465 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.465 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.465 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.466 11:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.726 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.987 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.987 { 00:17:23.987 "cntlid": 61, 00:17:23.987 "qid": 0, 00:17:23.987 "state": "enabled", 00:17:23.987 "thread": "nvmf_tgt_poll_group_000", 00:17:23.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.987 "listen_address": { 00:17:23.987 "trtype": "TCP", 00:17:23.987 "adrfam": "IPv4", 00:17:23.987 "traddr": "10.0.0.2", 00:17:23.987 "trsvcid": "4420" 00:17:23.987 }, 00:17:23.987 "peer_address": { 00:17:23.987 "trtype": "TCP", 00:17:23.987 "adrfam": "IPv4", 00:17:23.987 "traddr": "10.0.0.1", 00:17:23.987 "trsvcid": "47904" 00:17:23.987 }, 00:17:23.987 "auth": { 00:17:23.987 "state": "completed", 00:17:23.987 "digest": "sha384", 00:17:23.987 "dhgroup": "ffdhe2048" 00:17:23.987 } 00:17:23.987 } 00:17:23.987 ]' 00:17:23.987 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.248 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.248 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.248 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.248 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.248 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.248 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.248 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.509 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:24.509 11:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.080 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.340 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.340 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.340 00:17:25.340 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.340 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.340 11:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.601 { 00:17:25.601 "cntlid": 63, 00:17:25.601 "qid": 0, 00:17:25.601 "state": "enabled", 00:17:25.601 "thread": "nvmf_tgt_poll_group_000", 00:17:25.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.601 "listen_address": { 00:17:25.601 "trtype": "TCP", 00:17:25.601 "adrfam": "IPv4", 00:17:25.601 "traddr": "10.0.0.2", 00:17:25.601 "trsvcid": "4420" 00:17:25.601 }, 00:17:25.601 "peer_address": { 00:17:25.601 "trtype": "TCP", 00:17:25.601 "adrfam": "IPv4", 00:17:25.601 "traddr": "10.0.0.1", 00:17:25.601 "trsvcid": "47940" 00:17:25.601 }, 00:17:25.601 "auth": { 00:17:25.601 "state": "completed", 00:17:25.601 "digest": "sha384", 00:17:25.601 "dhgroup": "ffdhe2048" 00:17:25.601 } 00:17:25.601 } 00:17:25.601 ]' 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.601 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.862 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.862 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.862 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.862 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.862 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.862 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:25.862 11:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:26.433 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:26.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.433 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.433 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.693 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.954 
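
The cycle that just completed for ffdhe2048, and restarts here for ffdhe3072, follows the nested loop visible in the trace (target/auth.sh lines 119-123): for every DH group, iterate over every configured key index, restrict the host-side driver to that digest/group pair, then run one authenticated connect/verify/disconnect pass. A minimal sketch of that driver loop, assuming the keys array and connect_authenticate function the script defines outside this excerpt:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    digest=sha384
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)   # groups exercised in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Limit the host-side driver to a single digest/DH-group pair...
            "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # ...then add the host, attach, verify the qpair, and tear down.
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
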
00:17:26.954 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.954 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.954 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.215 { 00:17:27.215 "cntlid": 65, 00:17:27.215 "qid": 0, 00:17:27.215 "state": "enabled", 00:17:27.215 "thread": "nvmf_tgt_poll_group_000", 00:17:27.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.215 "listen_address": { 00:17:27.215 "trtype": "TCP", 00:17:27.215 "adrfam": "IPv4", 00:17:27.215 "traddr": "10.0.0.2", 00:17:27.215 "trsvcid": "4420" 00:17:27.215 }, 00:17:27.215 "peer_address": { 00:17:27.215 "trtype": "TCP", 00:17:27.215 "adrfam": "IPv4", 00:17:27.215 "traddr": "10.0.0.1", 00:17:27.215 "trsvcid": "47962" 00:17:27.215 }, 00:17:27.215 "auth": { 00:17:27.215 "state": "completed", 00:17:27.215 "digest": "sha384", 00:17:27.215 "dhgroup": "ffdhe3072" 00:17:27.215 } 00:17:27.215 } 00:17:27.215 ]' 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.215 11:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.475 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:27.475 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:28.046 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.046 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.046 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.046 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.046 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.046 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.046 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.046 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.306 11:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.566 00:17:28.566 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.566 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.566 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.827 { 00:17:28.827 "cntlid": 67, 00:17:28.827 "qid": 0, 00:17:28.827 "state": "enabled", 00:17:28.827 "thread": "nvmf_tgt_poll_group_000", 00:17:28.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.827 "listen_address": { 00:17:28.827 "trtype": "TCP", 00:17:28.827 "adrfam": "IPv4", 00:17:28.827 "traddr": "10.0.0.2", 00:17:28.827 "trsvcid": "4420" 00:17:28.827 }, 00:17:28.827 "peer_address": { 00:17:28.827 "trtype": "TCP", 00:17:28.827 "adrfam": "IPv4", 00:17:28.827 "traddr": "10.0.0.1", 00:17:28.827 "trsvcid": "47990" 00:17:28.827 }, 00:17:28.827 "auth": { 00:17:28.827 "state": "completed", 00:17:28.827 "digest": "sha384", 00:17:28.827 "dhgroup": "ffdhe3072" 00:17:28.827 } 00:17:28.827 } 00:17:28.827 ]' 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.827 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.086 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret 
DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:29.086 11:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:29.657 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.657 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.657 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.657 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.657 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.657 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.657 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.657 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.918 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.179 00:17:30.179 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.179 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.179 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.179 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.179 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.179 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.179 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.439 { 00:17:30.439 "cntlid": 69, 00:17:30.439 "qid": 0, 00:17:30.439 "state": "enabled", 00:17:30.439 "thread": "nvmf_tgt_poll_group_000", 00:17:30.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.439 "listen_address": { 00:17:30.439 "trtype": "TCP", 00:17:30.439 "adrfam": "IPv4", 00:17:30.439 "traddr": "10.0.0.2", 00:17:30.439 "trsvcid": "4420" 00:17:30.439 }, 00:17:30.439 "peer_address": { 00:17:30.439 "trtype": "TCP", 00:17:30.439 "adrfam": "IPv4", 00:17:30.439 "traddr": "10.0.0.1", 00:17:30.439 "trsvcid": "48024" 00:17:30.439 }, 00:17:30.439 "auth": { 00:17:30.439 "state": "completed", 00:17:30.439 "digest": "sha384", 00:17:30.439 "dhgroup": "ffdhe3072" 00:17:30.439 } 00:17:30.439 } 00:17:30.439 ]' 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.439 11:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:30.699 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:30.699 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:31.268 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.268 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.268 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.268 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.268 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.268 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.268 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.268 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
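
The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at target/auth.sh@68, seen again in the key3 pass above, is what makes the controller key optional: bash's ${var:+word} expands to word only when var is set and non-empty, so when no controller key exists for a key index the array stays empty and the RPCs run without --dhchap-ctrlr-key, i.e. unidirectional authentication. That is why the key0-key2 passes carry ckey0-ckey2 while the key3 passes do not. A sketch, with rpc, hostnqn, and the ckeys array standing in for the script's own setup:

    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # empty array if ckeys[3] is unset
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"                  # ctrlr-key args appear only when present
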
00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.528 11:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.789 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.789 { 00:17:31.789 "cntlid": 71, 00:17:31.789 "qid": 0, 00:17:31.789 "state": "enabled", 00:17:31.789 "thread": "nvmf_tgt_poll_group_000", 00:17:31.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.789 "listen_address": { 00:17:31.789 "trtype": "TCP", 00:17:31.789 "adrfam": "IPv4", 00:17:31.789 "traddr": "10.0.0.2", 00:17:31.789 "trsvcid": "4420" 00:17:31.789 }, 00:17:31.789 "peer_address": { 00:17:31.789 "trtype": "TCP", 00:17:31.789 "adrfam": "IPv4", 00:17:31.789 "traddr": "10.0.0.1", 00:17:31.789 "trsvcid": "33276" 00:17:31.789 }, 00:17:31.789 "auth": { 00:17:31.789 "state": "completed", 00:17:31.789 "digest": "sha384", 00:17:31.789 "dhgroup": "ffdhe3072" 00:17:31.789 } 00:17:31.789 } 00:17:31.789 ]' 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.789 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.050 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.050 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.050 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.050 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.050 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.050 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.050 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:32.050 11:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:32.642 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
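Each pass is verified the same way before teardown: the attached controller name is read back, and the subsystem's qpair list is checked for a completed authentication with the expected digest and DH group. A condensed sketch of those checks, reusing the RPC/HOST_SOCK/SUBNQN variables from the sketch above (the expected values here match the ffdhe4096 pass that follows):

  # controller must have come up under the requested name
  [[ $($RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # the qpair auth block must report the negotiated parameters and a completed state
  qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # detach so the next key/DH-group combination starts clean
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0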
00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.925 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.227 00:17:33.227 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.227 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.227 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.503 { 00:17:33.503 "cntlid": 73, 00:17:33.503 "qid": 0, 00:17:33.503 "state": "enabled", 00:17:33.503 "thread": "nvmf_tgt_poll_group_000", 00:17:33.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.503 "listen_address": { 00:17:33.503 "trtype": "TCP", 00:17:33.503 "adrfam": "IPv4", 00:17:33.503 "traddr": "10.0.0.2", 00:17:33.503 "trsvcid": "4420" 00:17:33.503 }, 00:17:33.503 "peer_address": { 00:17:33.503 "trtype": "TCP", 00:17:33.503 "adrfam": "IPv4", 00:17:33.503 "traddr": "10.0.0.1", 00:17:33.503 "trsvcid": "33316" 00:17:33.503 }, 00:17:33.503 "auth": { 00:17:33.503 "state": "completed", 00:17:33.503 "digest": "sha384", 00:17:33.503 "dhgroup": "ffdhe4096" 00:17:33.503 } 00:17:33.503 } 00:17:33.503 ]' 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.503 11:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.503 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.503 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.503 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.503 
11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.504 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.763 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:33.763 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:34.332 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.332 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.333 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.333 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.333 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.333 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.333 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.333 11:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.592 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.593 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.853 00:17:34.853 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.853 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.853 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.853 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.853 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.853 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.853 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.113 { 00:17:35.113 "cntlid": 75, 00:17:35.113 "qid": 0, 00:17:35.113 "state": "enabled", 00:17:35.113 "thread": "nvmf_tgt_poll_group_000", 00:17:35.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.113 "listen_address": { 00:17:35.113 "trtype": "TCP", 00:17:35.113 "adrfam": "IPv4", 00:17:35.113 "traddr": "10.0.0.2", 00:17:35.113 "trsvcid": "4420" 00:17:35.113 }, 00:17:35.113 "peer_address": { 00:17:35.113 "trtype": "TCP", 00:17:35.113 "adrfam": "IPv4", 00:17:35.113 "traddr": "10.0.0.1", 00:17:35.113 "trsvcid": "33336" 00:17:35.113 }, 00:17:35.113 "auth": { 00:17:35.113 "state": "completed", 00:17:35.113 "digest": "sha384", 00:17:35.113 "dhgroup": "ffdhe4096" 00:17:35.113 } 00:17:35.113 } 00:17:35.113 ]' 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.113 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.375 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:35.375 11:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:35.945 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.945 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.945 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.945 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.945 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.945 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.945 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.945 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.205 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.466 00:17:36.466 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.466 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.466 11:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.466 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.466 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.466 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.466 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.466 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.466 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.466 { 00:17:36.466 "cntlid": 77, 00:17:36.466 "qid": 0, 00:17:36.466 "state": "enabled", 00:17:36.466 "thread": "nvmf_tgt_poll_group_000", 00:17:36.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.466 "listen_address": { 00:17:36.466 "trtype": "TCP", 00:17:36.466 "adrfam": "IPv4", 00:17:36.466 "traddr": "10.0.0.2", 00:17:36.466 "trsvcid": "4420" 00:17:36.466 }, 00:17:36.466 "peer_address": { 00:17:36.466 "trtype": "TCP", 00:17:36.466 "adrfam": "IPv4", 00:17:36.466 "traddr": "10.0.0.1", 00:17:36.466 "trsvcid": "33368" 00:17:36.466 }, 00:17:36.466 "auth": { 00:17:36.466 "state": "completed", 00:17:36.466 "digest": "sha384", 00:17:36.466 "dhgroup": "ffdhe4096" 00:17:36.466 } 00:17:36.466 } 00:17:36.466 ]' 00:17:36.466 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.726 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.726 11:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.726 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.726 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.726 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.726 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.726 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.726 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:36.726 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:37.666 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.666 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.666 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.666 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.666 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.666 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.666 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.666 11:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.666 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:37.666 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.666 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.666 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.666 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.666 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.667 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:37.667 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.667 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.667 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.667 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.667 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.667 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.926 00:17:37.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.926 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.186 { 00:17:38.186 "cntlid": 79, 00:17:38.186 "qid": 0, 00:17:38.186 "state": "enabled", 00:17:38.186 "thread": "nvmf_tgt_poll_group_000", 00:17:38.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.186 "listen_address": { 00:17:38.186 "trtype": "TCP", 00:17:38.186 "adrfam": "IPv4", 00:17:38.186 "traddr": "10.0.0.2", 00:17:38.186 "trsvcid": "4420" 00:17:38.186 }, 00:17:38.186 "peer_address": { 00:17:38.186 "trtype": "TCP", 00:17:38.186 "adrfam": "IPv4", 00:17:38.186 "traddr": "10.0.0.1", 00:17:38.186 "trsvcid": "33396" 00:17:38.186 }, 00:17:38.186 "auth": { 00:17:38.186 "state": "completed", 00:17:38.186 "digest": "sha384", 00:17:38.186 "dhgroup": "ffdhe4096" 00:17:38.186 } 00:17:38.186 } 00:17:38.186 ]' 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.186 11:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.186 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.446 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:38.446 11:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.017 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:39.278 11:51:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.278 11:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.538 00:17:39.538 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.538 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.538 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.798 { 00:17:39.798 "cntlid": 81, 00:17:39.798 "qid": 0, 00:17:39.798 "state": "enabled", 00:17:39.798 "thread": "nvmf_tgt_poll_group_000", 00:17:39.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.798 "listen_address": { 00:17:39.798 "trtype": "TCP", 00:17:39.798 "adrfam": "IPv4", 00:17:39.798 "traddr": "10.0.0.2", 00:17:39.798 "trsvcid": "4420" 00:17:39.798 }, 00:17:39.798 "peer_address": { 00:17:39.798 "trtype": "TCP", 00:17:39.798 "adrfam": "IPv4", 00:17:39.798 "traddr": "10.0.0.1", 00:17:39.798 "trsvcid": "33432" 00:17:39.798 }, 00:17:39.798 "auth": { 00:17:39.798 "state": "completed", 00:17:39.798 "digest": 
"sha384", 00:17:39.798 "dhgroup": "ffdhe6144" 00:17:39.798 } 00:17:39.798 } 00:17:39.798 ]' 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.798 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.058 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:40.058 11:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:40.628 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.628 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.628 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.628 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.628 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.628 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.628 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.628 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.888 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.149 00:17:41.149 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.149 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.149 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.410 { 00:17:41.410 "cntlid": 83, 00:17:41.410 "qid": 0, 00:17:41.410 "state": "enabled", 00:17:41.410 "thread": "nvmf_tgt_poll_group_000", 00:17:41.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.410 "listen_address": { 00:17:41.410 "trtype": "TCP", 00:17:41.410 "adrfam": "IPv4", 00:17:41.410 "traddr": "10.0.0.2", 00:17:41.410 
"trsvcid": "4420" 00:17:41.410 }, 00:17:41.410 "peer_address": { 00:17:41.410 "trtype": "TCP", 00:17:41.410 "adrfam": "IPv4", 00:17:41.410 "traddr": "10.0.0.1", 00:17:41.410 "trsvcid": "39080" 00:17:41.410 }, 00:17:41.410 "auth": { 00:17:41.410 "state": "completed", 00:17:41.410 "digest": "sha384", 00:17:41.410 "dhgroup": "ffdhe6144" 00:17:41.410 } 00:17:41.410 } 00:17:41.410 ]' 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.410 11:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.410 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.410 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.410 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.670 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:41.671 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:42.241 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.241 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.241 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.241 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.241 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.241 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.241 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.241 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.501 
11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.501 11:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.761 00:17:42.761 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.761 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.761 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.021 { 00:17:43.021 "cntlid": 85, 00:17:43.021 "qid": 0, 00:17:43.021 "state": "enabled", 00:17:43.021 "thread": "nvmf_tgt_poll_group_000", 00:17:43.021 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.021 "listen_address": { 00:17:43.021 "trtype": "TCP", 00:17:43.021 "adrfam": "IPv4", 00:17:43.021 "traddr": "10.0.0.2", 00:17:43.021 "trsvcid": "4420" 00:17:43.021 }, 00:17:43.021 "peer_address": { 00:17:43.021 "trtype": "TCP", 00:17:43.021 "adrfam": "IPv4", 00:17:43.021 "traddr": "10.0.0.1", 00:17:43.021 "trsvcid": "39108" 00:17:43.021 }, 00:17:43.021 "auth": { 00:17:43.021 "state": "completed", 00:17:43.021 "digest": "sha384", 00:17:43.021 "dhgroup": "ffdhe6144" 00:17:43.021 } 00:17:43.021 } 00:17:43.021 ]' 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.021 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.282 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.282 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.282 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.282 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:43.282 11:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:43.852 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.852 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.852 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.852 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.113 11:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.113 11:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.373 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.634 { 00:17:44.634 "cntlid": 87, 
00:17:44.634 "qid": 0, 00:17:44.634 "state": "enabled", 00:17:44.634 "thread": "nvmf_tgt_poll_group_000", 00:17:44.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.634 "listen_address": { 00:17:44.634 "trtype": "TCP", 00:17:44.634 "adrfam": "IPv4", 00:17:44.634 "traddr": "10.0.0.2", 00:17:44.634 "trsvcid": "4420" 00:17:44.634 }, 00:17:44.634 "peer_address": { 00:17:44.634 "trtype": "TCP", 00:17:44.634 "adrfam": "IPv4", 00:17:44.634 "traddr": "10.0.0.1", 00:17:44.634 "trsvcid": "39140" 00:17:44.634 }, 00:17:44.634 "auth": { 00:17:44.634 "state": "completed", 00:17:44.634 "digest": "sha384", 00:17:44.634 "dhgroup": "ffdhe6144" 00:17:44.634 } 00:17:44.634 } 00:17:44.634 ]' 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.634 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.895 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.895 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.895 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.895 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.895 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.895 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:44.895 11:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.837 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.406 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.406 { 00:17:46.406 "cntlid": 89, 00:17:46.406 "qid": 0, 00:17:46.406 "state": "enabled", 00:17:46.406 "thread": "nvmf_tgt_poll_group_000", 00:17:46.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.406 "listen_address": { 00:17:46.406 "trtype": "TCP", 00:17:46.406 "adrfam": "IPv4", 00:17:46.406 "traddr": "10.0.0.2", 00:17:46.406 "trsvcid": "4420" 00:17:46.406 }, 00:17:46.406 "peer_address": { 00:17:46.406 "trtype": "TCP", 00:17:46.406 "adrfam": "IPv4", 00:17:46.406 "traddr": "10.0.0.1", 00:17:46.406 "trsvcid": "39162" 00:17:46.406 }, 00:17:46.406 "auth": { 00:17:46.406 "state": "completed", 00:17:46.406 "digest": "sha384", 00:17:46.406 "dhgroup": "ffdhe8192" 00:17:46.406 } 00:17:46.406 } 00:17:46.406 ]' 00:17:46.406 11:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.406 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.406 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.666 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.666 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.666 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.666 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.666 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.927 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:46.927 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:47.498 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.498 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.498 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.498 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.498 11:51:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.498 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.498 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.498 11:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.498 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:47.498 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.498 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.498 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:47.498 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.498 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.498 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.499 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.499 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.499 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.499 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.499 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.499 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.068 00:17:48.068 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.068 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.068 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.328 { 00:17:48.328 "cntlid": 91, 00:17:48.328 "qid": 0, 00:17:48.328 "state": "enabled", 00:17:48.328 "thread": "nvmf_tgt_poll_group_000", 00:17:48.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.328 "listen_address": { 00:17:48.328 "trtype": "TCP", 00:17:48.328 "adrfam": "IPv4", 00:17:48.328 "traddr": "10.0.0.2", 00:17:48.328 "trsvcid": "4420" 00:17:48.328 }, 00:17:48.328 "peer_address": { 00:17:48.328 "trtype": "TCP", 00:17:48.328 "adrfam": "IPv4", 00:17:48.328 "traddr": "10.0.0.1", 00:17:48.328 "trsvcid": "39174" 00:17:48.328 }, 00:17:48.328 "auth": { 00:17:48.328 "state": "completed", 00:17:48.328 "digest": "sha384", 00:17:48.328 "dhgroup": "ffdhe8192" 00:17:48.328 } 00:17:48.328 } 00:17:48.328 ]' 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.328 11:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.588 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:48.588 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:49.159 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.159 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.159 11:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.159 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.159 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.159 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.159 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.159 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.420 11:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.680 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.940 11:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.940 { 00:17:49.940 "cntlid": 93, 00:17:49.940 "qid": 0, 00:17:49.940 "state": "enabled", 00:17:49.940 "thread": "nvmf_tgt_poll_group_000", 00:17:49.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.940 "listen_address": { 00:17:49.940 "trtype": "TCP", 00:17:49.940 "adrfam": "IPv4", 00:17:49.940 "traddr": "10.0.0.2", 00:17:49.940 "trsvcid": "4420" 00:17:49.940 }, 00:17:49.940 "peer_address": { 00:17:49.940 "trtype": "TCP", 00:17:49.940 "adrfam": "IPv4", 00:17:49.940 "traddr": "10.0.0.1", 00:17:49.940 "trsvcid": "39210" 00:17:49.940 }, 00:17:49.940 "auth": { 00:17:49.940 "state": "completed", 00:17:49.940 "digest": "sha384", 00:17:49.940 "dhgroup": "ffdhe8192" 00:17:49.940 } 00:17:49.940 } 00:17:49.940 ]' 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.940 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.200 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.200 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.200 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.200 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.200 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.200 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:50.200 11:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:50.770 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.031 11:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.031 11:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.602 00:17:51.602 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.602 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.602 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.861 { 00:17:51.861 "cntlid": 95, 00:17:51.861 "qid": 0, 00:17:51.861 "state": "enabled", 00:17:51.861 "thread": "nvmf_tgt_poll_group_000", 00:17:51.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.861 "listen_address": { 00:17:51.861 "trtype": "TCP", 00:17:51.861 "adrfam": "IPv4", 00:17:51.861 "traddr": "10.0.0.2", 00:17:51.861 "trsvcid": "4420" 00:17:51.861 }, 00:17:51.861 "peer_address": { 00:17:51.861 "trtype": "TCP", 00:17:51.861 "adrfam": "IPv4", 00:17:51.861 "traddr": "10.0.0.1", 00:17:51.861 "trsvcid": "40688" 00:17:51.861 }, 00:17:51.861 "auth": { 00:17:51.861 "state": "completed", 00:17:51.861 "digest": "sha384", 00:17:51.861 "dhgroup": "ffdhe8192" 00:17:51.861 } 00:17:51.861 } 00:17:51.861 ]' 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.861 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.119 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:52.119 11:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.689 11:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.689 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.949 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.950 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.950 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.950 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.210 00:17:53.211 
11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.211 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.211 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.211 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.211 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.211 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.211 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.471 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.471 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.471 { 00:17:53.471 "cntlid": 97, 00:17:53.471 "qid": 0, 00:17:53.471 "state": "enabled", 00:17:53.471 "thread": "nvmf_tgt_poll_group_000", 00:17:53.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.471 "listen_address": { 00:17:53.471 "trtype": "TCP", 00:17:53.471 "adrfam": "IPv4", 00:17:53.471 "traddr": "10.0.0.2", 00:17:53.471 "trsvcid": "4420" 00:17:53.471 }, 00:17:53.471 "peer_address": { 00:17:53.471 "trtype": "TCP", 00:17:53.471 "adrfam": "IPv4", 00:17:53.471 "traddr": "10.0.0.1", 00:17:53.471 "trsvcid": "40722" 00:17:53.471 }, 00:17:53.471 "auth": { 00:17:53.471 "state": "completed", 00:17:53.471 "digest": "sha512", 00:17:53.471 "dhgroup": "null" 00:17:53.471 } 00:17:53.471 } 00:17:53.471 ]' 00:17:53.471 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.471 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.471 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.471 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:53.471 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.471 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.472 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.472 11:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.731 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:53.731 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:54.301 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.301 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.302 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.302 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.302 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.302 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.302 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.302 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.562 11:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.562 00:17:54.562 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.562 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.562 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.822 { 00:17:54.822 "cntlid": 99, 00:17:54.822 "qid": 0, 00:17:54.822 "state": "enabled", 00:17:54.822 "thread": "nvmf_tgt_poll_group_000", 00:17:54.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.822 "listen_address": { 00:17:54.822 "trtype": "TCP", 00:17:54.822 "adrfam": "IPv4", 00:17:54.822 "traddr": "10.0.0.2", 00:17:54.822 "trsvcid": "4420" 00:17:54.822 }, 00:17:54.822 "peer_address": { 00:17:54.822 "trtype": "TCP", 00:17:54.822 "adrfam": "IPv4", 00:17:54.822 "traddr": "10.0.0.1", 00:17:54.822 "trsvcid": "40744" 00:17:54.822 }, 00:17:54.822 "auth": { 00:17:54.822 "state": "completed", 00:17:54.822 "digest": "sha512", 00:17:54.822 "dhgroup": "null" 00:17:54.822 } 00:17:54.822 } 00:17:54.822 ]' 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.822 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.083 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:55.083 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.083 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.083 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.083 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.343 11:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:55.343 11:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:55.914 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.174 00:17:56.174 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.174 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.174 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.433 { 00:17:56.433 "cntlid": 101, 00:17:56.433 "qid": 0, 00:17:56.433 "state": "enabled", 00:17:56.433 "thread": "nvmf_tgt_poll_group_000", 00:17:56.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.433 "listen_address": { 00:17:56.433 "trtype": "TCP", 00:17:56.433 "adrfam": "IPv4", 00:17:56.433 "traddr": "10.0.0.2", 00:17:56.433 "trsvcid": "4420" 00:17:56.433 }, 00:17:56.433 "peer_address": { 00:17:56.433 "trtype": "TCP", 00:17:56.433 "adrfam": "IPv4", 00:17:56.433 "traddr": "10.0.0.1", 00:17:56.433 "trsvcid": "40760" 00:17:56.433 }, 00:17:56.433 "auth": { 00:17:56.433 "state": "completed", 00:17:56.433 "digest": "sha512", 00:17:56.433 "dhgroup": "null" 00:17:56.433 } 00:17:56.433 } 00:17:56.433 ]' 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.433 11:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.433 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:56.433 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.693 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.693 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.693 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.693 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:56.693 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:17:57.264 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.264 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.264 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.264 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.264 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.264 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.264 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.264 11:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.525 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.786 00:17:57.786 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.786 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.786 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.047 { 00:17:58.047 "cntlid": 103, 00:17:58.047 "qid": 0, 00:17:58.047 "state": "enabled", 00:17:58.047 "thread": "nvmf_tgt_poll_group_000", 00:17:58.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.047 "listen_address": { 00:17:58.047 "trtype": "TCP", 00:17:58.047 "adrfam": "IPv4", 00:17:58.047 "traddr": "10.0.0.2", 00:17:58.047 "trsvcid": "4420" 00:17:58.047 }, 00:17:58.047 "peer_address": { 00:17:58.047 "trtype": "TCP", 00:17:58.047 "adrfam": "IPv4", 00:17:58.047 "traddr": "10.0.0.1", 00:17:58.047 "trsvcid": "40800" 00:17:58.047 }, 00:17:58.047 "auth": { 00:17:58.047 "state": "completed", 00:17:58.047 "digest": "sha512", 00:17:58.047 "dhgroup": "null" 00:17:58.047 } 00:17:58.047 } 00:17:58.047 ]' 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.047 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.307 11:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:58.307 11:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:58.877 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.138 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.400 00:17:59.400 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.400 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.400 11:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.400 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.400 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.400 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.400 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.661 { 00:17:59.661 "cntlid": 105, 00:17:59.661 "qid": 0, 00:17:59.661 "state": "enabled", 00:17:59.661 "thread": "nvmf_tgt_poll_group_000", 00:17:59.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.661 "listen_address": { 00:17:59.661 "trtype": "TCP", 00:17:59.661 "adrfam": "IPv4", 00:17:59.661 "traddr": "10.0.0.2", 00:17:59.661 "trsvcid": "4420" 00:17:59.661 }, 00:17:59.661 "peer_address": { 00:17:59.661 "trtype": "TCP", 00:17:59.661 "adrfam": "IPv4", 00:17:59.661 "traddr": "10.0.0.1", 00:17:59.661 "trsvcid": "40824" 00:17:59.661 }, 00:17:59.661 "auth": { 00:17:59.661 "state": "completed", 00:17:59.661 "digest": "sha512", 00:17:59.661 "dhgroup": "ffdhe2048" 00:17:59.661 } 00:17:59.661 } 00:17:59.661 ]' 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.661 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.661 11:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.921 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:17:59.921 11:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:00.491 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.491 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.491 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.491 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.491 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.491 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.491 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.491 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.752 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.012 00:18:01.012 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.012 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.012 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.012 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.012 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.012 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.012 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.272 { 00:18:01.272 "cntlid": 107, 00:18:01.272 "qid": 0, 00:18:01.272 "state": "enabled", 00:18:01.272 "thread": "nvmf_tgt_poll_group_000", 00:18:01.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.272 "listen_address": { 00:18:01.272 "trtype": "TCP", 00:18:01.272 "adrfam": "IPv4", 00:18:01.272 "traddr": "10.0.0.2", 00:18:01.272 "trsvcid": "4420" 00:18:01.272 }, 00:18:01.272 "peer_address": { 00:18:01.272 "trtype": "TCP", 00:18:01.272 "adrfam": "IPv4", 00:18:01.272 "traddr": "10.0.0.1", 00:18:01.272 "trsvcid": "51014" 00:18:01.272 }, 00:18:01.272 "auth": { 00:18:01.272 "state": "completed", 00:18:01.272 "digest": "sha512", 00:18:01.272 "dhgroup": "ffdhe2048" 00:18:01.272 } 00:18:01.272 } 00:18:01.272 ]' 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.272 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.532 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:01.532 11:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:02.101 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.101 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.101 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.101 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.101 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.101 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.101 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.101 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
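The nvme_connect / nvme disconnect pairs in the trace exercise the same handshake from the kernel initiator via nvme-cli. Roughly (secrets elided here; the DHHC-1:xx:...: blobs are per-run generated keys, passed to --dhchap-secret / --dhchap-ctrl-secret exactly as logged above):

    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -l 0 \
        -q "$HOSTNQN" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-secret "DHHC-1:01:<host key>" --dhchap-ctrl-secret "DHHC-1:02:<controller key>"
    nvme disconnect -n "$SUBNQN"   # trace logs "disconnected 1 controller(s)" on success

Supplying --dhchap-ctrl-secret makes the authentication bidirectional; the key3 iterations pass only --dhchap-secret, matching the absent ckey3 in the corresponding nvmf_subsystem_add_host calls.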
00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.361 11:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.621 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.621 { 00:18:02.621 "cntlid": 109, 00:18:02.621 "qid": 0, 00:18:02.621 "state": "enabled", 00:18:02.621 "thread": "nvmf_tgt_poll_group_000", 00:18:02.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.621 "listen_address": { 00:18:02.621 "trtype": "TCP", 00:18:02.621 "adrfam": "IPv4", 00:18:02.621 "traddr": "10.0.0.2", 00:18:02.621 "trsvcid": "4420" 00:18:02.621 }, 00:18:02.621 "peer_address": { 00:18:02.621 "trtype": "TCP", 00:18:02.621 "adrfam": "IPv4", 00:18:02.621 "traddr": "10.0.0.1", 00:18:02.621 "trsvcid": "51040" 00:18:02.621 }, 00:18:02.621 "auth": { 00:18:02.621 "state": "completed", 00:18:02.621 "digest": "sha512", 00:18:02.621 "dhgroup": "ffdhe2048" 00:18:02.621 } 00:18:02.621 } 00:18:02.621 ]' 00:18:02.621 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.882 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.882 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.882 11:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.882 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.882 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.882 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.882 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.142 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:03.142 11:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.712 11:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.712 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.972 00:18:03.972 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.972 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.972 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.232 { 00:18:04.232 "cntlid": 111, 00:18:04.232 "qid": 0, 00:18:04.232 "state": "enabled", 00:18:04.232 "thread": "nvmf_tgt_poll_group_000", 00:18:04.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.232 "listen_address": { 00:18:04.232 "trtype": "TCP", 00:18:04.232 "adrfam": "IPv4", 00:18:04.232 "traddr": "10.0.0.2", 00:18:04.232 "trsvcid": "4420" 00:18:04.232 }, 00:18:04.232 "peer_address": { 00:18:04.232 "trtype": "TCP", 00:18:04.232 "adrfam": "IPv4", 00:18:04.232 "traddr": "10.0.0.1", 00:18:04.232 "trsvcid": "51060" 00:18:04.232 }, 00:18:04.232 "auth": { 00:18:04.232 "state": "completed", 00:18:04.232 "digest": "sha512", 00:18:04.232 "dhgroup": "ffdhe2048" 00:18:04.232 } 00:18:04.232 } 00:18:04.232 ]' 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.232 
11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.232 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.493 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.493 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.493 11:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.493 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:04.493 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.062 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.322 11:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.582 00:18:05.582 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.582 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.582 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.843 { 00:18:05.843 "cntlid": 113, 00:18:05.843 "qid": 0, 00:18:05.843 "state": "enabled", 00:18:05.843 "thread": "nvmf_tgt_poll_group_000", 00:18:05.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:05.843 "listen_address": { 00:18:05.843 "trtype": "TCP", 00:18:05.843 "adrfam": "IPv4", 00:18:05.843 "traddr": "10.0.0.2", 00:18:05.843 "trsvcid": "4420" 00:18:05.843 }, 00:18:05.843 "peer_address": { 00:18:05.843 "trtype": "TCP", 00:18:05.843 "adrfam": "IPv4", 00:18:05.843 "traddr": "10.0.0.1", 00:18:05.843 "trsvcid": "51098" 00:18:05.843 }, 00:18:05.843 "auth": { 00:18:05.843 "state": "completed", 00:18:05.843 "digest": "sha512", 00:18:05.843 "dhgroup": "ffdhe3072" 00:18:05.843 } 00:18:05.843 } 00:18:05.843 ]' 00:18:05.843 11:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.843 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:06.104 11:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:06.673 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.673 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.673 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.673 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.673 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.673 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.673 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.673 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.933 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.193 00:18:07.193 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.193 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.194 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.454 { 00:18:07.454 "cntlid": 115, 00:18:07.454 "qid": 0, 00:18:07.454 "state": "enabled", 00:18:07.454 "thread": "nvmf_tgt_poll_group_000", 00:18:07.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.454 "listen_address": { 00:18:07.454 "trtype": "TCP", 00:18:07.454 "adrfam": "IPv4", 00:18:07.454 "traddr": "10.0.0.2", 00:18:07.454 "trsvcid": "4420" 00:18:07.454 }, 00:18:07.454 "peer_address": { 00:18:07.454 "trtype": "TCP", 00:18:07.454 "adrfam": "IPv4", 
00:18:07.454 "traddr": "10.0.0.1", 00:18:07.454 "trsvcid": "51114" 00:18:07.454 }, 00:18:07.454 "auth": { 00:18:07.454 "state": "completed", 00:18:07.454 "digest": "sha512", 00:18:07.454 "dhgroup": "ffdhe3072" 00:18:07.454 } 00:18:07.454 } 00:18:07.454 ]' 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.454 11:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.454 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.454 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.454 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.454 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.454 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.714 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:07.714 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:08.284 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.284 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.284 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.284 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.284 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.284 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.284 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.284 11:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.544 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.804 00:18:08.804 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.804 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.804 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.064 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.064 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.064 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.064 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.064 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.064 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.064 { 00:18:09.064 "cntlid": 117, 00:18:09.064 "qid": 0, 00:18:09.064 "state": "enabled", 00:18:09.064 "thread": "nvmf_tgt_poll_group_000", 00:18:09.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.064 "listen_address": { 00:18:09.064 "trtype": "TCP", 
00:18:09.064 "adrfam": "IPv4", 00:18:09.064 "traddr": "10.0.0.2", 00:18:09.064 "trsvcid": "4420" 00:18:09.064 }, 00:18:09.064 "peer_address": { 00:18:09.064 "trtype": "TCP", 00:18:09.064 "adrfam": "IPv4", 00:18:09.064 "traddr": "10.0.0.1", 00:18:09.064 "trsvcid": "51146" 00:18:09.064 }, 00:18:09.064 "auth": { 00:18:09.064 "state": "completed", 00:18:09.064 "digest": "sha512", 00:18:09.064 "dhgroup": "ffdhe3072" 00:18:09.064 } 00:18:09.064 } 00:18:09.064 ]' 00:18:09.064 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.064 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.065 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.065 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.065 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.065 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.065 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.065 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.326 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:09.326 11:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:09.898 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.898 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.898 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.898 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.898 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.898 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.898 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.898 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.159 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.419 00:18:10.419 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.419 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.419 11:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.419 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.419 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.419 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.419 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.419 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.419 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.419 { 00:18:10.419 "cntlid": 119, 00:18:10.419 "qid": 0, 00:18:10.419 "state": "enabled", 00:18:10.419 "thread": "nvmf_tgt_poll_group_000", 00:18:10.419 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.419 "listen_address": { 00:18:10.419 "trtype": "TCP", 00:18:10.419 "adrfam": "IPv4", 00:18:10.419 "traddr": "10.0.0.2", 00:18:10.419 "trsvcid": "4420" 00:18:10.419 }, 00:18:10.419 "peer_address": { 00:18:10.419 "trtype": "TCP", 00:18:10.419 "adrfam": "IPv4", 00:18:10.419 "traddr": "10.0.0.1", 00:18:10.419 "trsvcid": "51172" 00:18:10.419 }, 00:18:10.419 "auth": { 00:18:10.419 "state": "completed", 00:18:10.419 "digest": "sha512", 00:18:10.419 "dhgroup": "ffdhe3072" 00:18:10.419 } 00:18:10.419 } 00:18:10.419 ]' 00:18:10.419 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.679 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.679 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.679 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.679 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.679 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.679 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.679 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.939 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:10.940 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:11.509 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.509 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.509 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.509 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.509 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.509 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.509 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.509 11:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.509 11:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.509 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.769 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.028 11:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.028 { 00:18:12.028 "cntlid": 121, 00:18:12.028 "qid": 0, 00:18:12.028 "state": "enabled", 00:18:12.028 "thread": "nvmf_tgt_poll_group_000", 00:18:12.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.028 "listen_address": { 00:18:12.028 "trtype": "TCP", 00:18:12.028 "adrfam": "IPv4", 00:18:12.028 "traddr": "10.0.0.2", 00:18:12.028 "trsvcid": "4420" 00:18:12.028 }, 00:18:12.028 "peer_address": { 00:18:12.028 "trtype": "TCP", 00:18:12.028 "adrfam": "IPv4", 00:18:12.028 "traddr": "10.0.0.1", 00:18:12.028 "trsvcid": "43238" 00:18:12.028 }, 00:18:12.028 "auth": { 00:18:12.028 "state": "completed", 00:18:12.028 "digest": "sha512", 00:18:12.028 "dhgroup": "ffdhe4096" 00:18:12.028 } 00:18:12.028 } 00:18:12.028 ]' 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.028 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.287 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.287 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.287 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.287 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.287 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.287 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:12.287 11:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
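Note: the qpair JSON above is the target's own record of the admin queue, and the three jq probes after it are the real pass/fail checks: authentication must have completed with exactly the digest and DH group this iteration configured. The same assertions in standalone form:

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]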
00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.226 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.485 00:18:13.485 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.485 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.485 11:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.745 { 00:18:13.745 "cntlid": 123, 00:18:13.745 "qid": 0, 00:18:13.745 "state": "enabled", 00:18:13.745 "thread": "nvmf_tgt_poll_group_000", 00:18:13.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.745 "listen_address": { 00:18:13.745 "trtype": "TCP", 00:18:13.745 "adrfam": "IPv4", 00:18:13.745 "traddr": "10.0.0.2", 00:18:13.745 "trsvcid": "4420" 00:18:13.745 }, 00:18:13.745 "peer_address": { 00:18:13.745 "trtype": "TCP", 00:18:13.745 "adrfam": "IPv4", 00:18:13.745 "traddr": "10.0.0.1", 00:18:13.745 "trsvcid": "43258" 00:18:13.745 }, 00:18:13.745 "auth": { 00:18:13.745 "state": "completed", 00:18:13.745 "digest": "sha512", 00:18:13.745 "dhgroup": "ffdhe4096" 00:18:13.745 } 00:18:13.745 } 00:18:13.745 ]' 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.745 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.005 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:14.005 11:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:14.575 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.575 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.575 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.575 11:51:59 
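Note: each iteration also replays the handshake through the Linux kernel initiator via nvme-cli, passing the secrets inline. The DHHC-1:NN: prefix on those strings is the NVMe secret representation: NN names the hash used to transform the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the key plus a CRC-32 check. A sketch of the kernel-side leg with the long secrets elided (the gen-dhchap-key comment shows one way to mint such a secret and is illustrative, not taken from this run):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
  # e.g. nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn "$hostnqn"
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:01:<elided>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<elided>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0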
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.575 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.575 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.575 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.575 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.835 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.095 00:18:15.095 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.095 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.095 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.095 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.095 11:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.095 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.095 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.356 { 00:18:15.356 "cntlid": 125, 00:18:15.356 "qid": 0, 00:18:15.356 "state": "enabled", 00:18:15.356 "thread": "nvmf_tgt_poll_group_000", 00:18:15.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.356 "listen_address": { 00:18:15.356 "trtype": "TCP", 00:18:15.356 "adrfam": "IPv4", 00:18:15.356 "traddr": "10.0.0.2", 00:18:15.356 "trsvcid": "4420" 00:18:15.356 }, 00:18:15.356 "peer_address": { 00:18:15.356 "trtype": "TCP", 00:18:15.356 "adrfam": "IPv4", 00:18:15.356 "traddr": "10.0.0.1", 00:18:15.356 "trsvcid": "43286" 00:18:15.356 }, 00:18:15.356 "auth": { 00:18:15.356 "state": "completed", 00:18:15.356 "digest": "sha512", 00:18:15.356 "dhgroup": "ffdhe4096" 00:18:15.356 } 00:18:15.356 } 00:18:15.356 ]' 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.356 11:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.616 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:15.616 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:16.188 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.188 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.188 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.188 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.188 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.188 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.188 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.188 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.449 11:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.712 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.712 11:52:01 
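Note: key slot 3 is the asymmetric case. ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion seen in the trace drops --dhchap-ctrlr-key entirely: only the host is challenged, and the controller never proves itself. The expansion isolated, with hypothetical array contents:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  keyid=3
  ckeys=("c0" "c1" "c2" "")  # slot 3 deliberately empty (hypothetical values)
  # An empty/unset :+ alternative expands to zero words; otherwise to two.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"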
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.712 { 00:18:16.712 "cntlid": 127, 00:18:16.712 "qid": 0, 00:18:16.712 "state": "enabled", 00:18:16.712 "thread": "nvmf_tgt_poll_group_000", 00:18:16.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.712 "listen_address": { 00:18:16.712 "trtype": "TCP", 00:18:16.712 "adrfam": "IPv4", 00:18:16.712 "traddr": "10.0.0.2", 00:18:16.712 "trsvcid": "4420" 00:18:16.712 }, 00:18:16.712 "peer_address": { 00:18:16.712 "trtype": "TCP", 00:18:16.712 "adrfam": "IPv4", 00:18:16.712 "traddr": "10.0.0.1", 00:18:16.712 "trsvcid": "43300" 00:18:16.712 }, 00:18:16.712 "auth": { 00:18:16.712 "state": "completed", 00:18:16.712 "digest": "sha512", 00:18:16.712 "dhgroup": "ffdhe4096" 00:18:16.712 } 00:18:16.712 } 00:18:16.712 ]' 00:18:16.712 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.973 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.973 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.973 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.973 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.973 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.973 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.973 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.234 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:17.234 11:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.804 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.373 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.373 
11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.373 { 00:18:18.373 "cntlid": 129, 00:18:18.373 "qid": 0, 00:18:18.373 "state": "enabled", 00:18:18.373 "thread": "nvmf_tgt_poll_group_000", 00:18:18.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.373 "listen_address": { 00:18:18.373 "trtype": "TCP", 00:18:18.373 "adrfam": "IPv4", 00:18:18.373 "traddr": "10.0.0.2", 00:18:18.373 "trsvcid": "4420" 00:18:18.373 }, 00:18:18.373 "peer_address": { 00:18:18.373 "trtype": "TCP", 00:18:18.373 "adrfam": "IPv4", 00:18:18.373 "traddr": "10.0.0.1", 00:18:18.373 "trsvcid": "43328" 00:18:18.373 }, 00:18:18.373 "auth": { 00:18:18.373 "state": "completed", 00:18:18.373 "digest": "sha512", 00:18:18.373 "dhgroup": "ffdhe6144" 00:18:18.373 } 00:18:18.373 } 00:18:18.373 ]' 00:18:18.373 11:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.633 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.633 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.633 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.633 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.633 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.633 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.633 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.893 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:18.893 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret 
DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:19.463 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.463 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.463 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.463 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.463 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.463 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.463 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.463 11:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.463 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.033 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.033 { 00:18:20.033 "cntlid": 131, 00:18:20.033 "qid": 0, 00:18:20.033 "state": "enabled", 00:18:20.033 "thread": "nvmf_tgt_poll_group_000", 00:18:20.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.033 "listen_address": { 00:18:20.033 "trtype": "TCP", 00:18:20.033 "adrfam": "IPv4", 00:18:20.033 "traddr": "10.0.0.2", 00:18:20.033 "trsvcid": "4420" 00:18:20.033 }, 00:18:20.033 "peer_address": { 00:18:20.033 "trtype": "TCP", 00:18:20.033 "adrfam": "IPv4", 00:18:20.033 "traddr": "10.0.0.1", 00:18:20.033 "trsvcid": "43350" 00:18:20.033 }, 00:18:20.033 "auth": { 00:18:20.033 "state": "completed", 00:18:20.033 "digest": "sha512", 00:18:20.033 "dhgroup": "ffdhe6144" 00:18:20.033 } 00:18:20.033 } 00:18:20.033 ]' 00:18:20.033 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.294 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.294 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.294 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.294 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.294 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.294 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.294 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.554 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:20.554 11:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.124 11:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.695 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.695 { 00:18:21.695 "cntlid": 133, 00:18:21.695 "qid": 0, 00:18:21.695 "state": "enabled", 00:18:21.695 "thread": "nvmf_tgt_poll_group_000", 00:18:21.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.695 "listen_address": { 00:18:21.695 "trtype": "TCP", 00:18:21.695 "adrfam": "IPv4", 00:18:21.695 "traddr": "10.0.0.2", 00:18:21.695 "trsvcid": "4420" 00:18:21.695 }, 00:18:21.695 "peer_address": { 00:18:21.695 "trtype": "TCP", 00:18:21.695 "adrfam": "IPv4", 00:18:21.695 "traddr": "10.0.0.1", 00:18:21.695 "trsvcid": "49554" 00:18:21.695 }, 00:18:21.695 "auth": { 00:18:21.695 "state": "completed", 00:18:21.695 "digest": "sha512", 00:18:21.695 "dhgroup": "ffdhe6144" 00:18:21.695 } 00:18:21.695 } 00:18:21.695 ]' 00:18:21.695 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret 
DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:21.955 11:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:22.526 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:22.787 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.356 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.357 { 00:18:23.357 "cntlid": 135, 00:18:23.357 "qid": 0, 00:18:23.357 "state": "enabled", 00:18:23.357 "thread": "nvmf_tgt_poll_group_000", 00:18:23.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.357 "listen_address": { 00:18:23.357 "trtype": "TCP", 00:18:23.357 "adrfam": "IPv4", 00:18:23.357 "traddr": "10.0.0.2", 00:18:23.357 "trsvcid": "4420" 00:18:23.357 }, 00:18:23.357 "peer_address": { 00:18:23.357 "trtype": "TCP", 00:18:23.357 "adrfam": "IPv4", 00:18:23.357 "traddr": "10.0.0.1", 00:18:23.357 "trsvcid": "49586" 00:18:23.357 }, 00:18:23.357 "auth": { 00:18:23.357 "state": "completed", 00:18:23.357 "digest": "sha512", 00:18:23.357 "dhgroup": "ffdhe6144" 00:18:23.357 } 00:18:23.357 } 00:18:23.357 ]' 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.357 11:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.616 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.616 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.616 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.616 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.616 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.616 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:23.616 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.186 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.447 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:24.447 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.447 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.447 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.447 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.447 11:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.447 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.447 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.447 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.447 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.447 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.447 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.447 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.016 00:18:25.016 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.016 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.016 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.276 { 00:18:25.276 "cntlid": 137, 00:18:25.276 "qid": 0, 00:18:25.276 "state": "enabled", 00:18:25.276 "thread": "nvmf_tgt_poll_group_000", 00:18:25.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.276 "listen_address": { 00:18:25.276 "trtype": "TCP", 00:18:25.276 "adrfam": "IPv4", 00:18:25.276 "traddr": "10.0.0.2", 00:18:25.276 "trsvcid": "4420" 00:18:25.276 }, 00:18:25.276 "peer_address": { 00:18:25.276 "trtype": "TCP", 00:18:25.276 "adrfam": "IPv4", 00:18:25.276 "traddr": "10.0.0.1", 00:18:25.276 "trsvcid": "49614" 00:18:25.276 }, 00:18:25.276 "auth": { 00:18:25.276 "state": "completed", 00:18:25.276 "digest": "sha512", 00:18:25.276 "dhgroup": "ffdhe8192" 00:18:25.276 } 00:18:25.276 } 00:18:25.276 ]' 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.276 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.538 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:25.538 11:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:26.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.110 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.370 11:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.370 11:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.630 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.890 { 00:18:26.890 "cntlid": 139, 00:18:26.890 "qid": 0, 00:18:26.890 "state": "enabled", 00:18:26.890 "thread": "nvmf_tgt_poll_group_000", 00:18:26.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.890 "listen_address": { 00:18:26.890 "trtype": "TCP", 00:18:26.890 "adrfam": "IPv4", 00:18:26.890 "traddr": "10.0.0.2", 00:18:26.890 "trsvcid": "4420" 00:18:26.890 }, 00:18:26.890 "peer_address": { 00:18:26.890 "trtype": "TCP", 00:18:26.890 "adrfam": "IPv4", 00:18:26.890 "traddr": "10.0.0.1", 00:18:26.890 "trsvcid": "49650" 00:18:26.890 }, 00:18:26.890 "auth": { 00:18:26.890 "state": "completed", 00:18:26.890 "digest": "sha512", 00:18:26.890 "dhgroup": "ffdhe8192" 00:18:26.890 } 00:18:26.890 } 00:18:26.890 ]' 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.890 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.891 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.152 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.152 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.152 11:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.152 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.152 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.412 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:27.412 11:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: --dhchap-ctrl-secret DHHC-1:02:NDQ3MWE1NDJkNjAwNmY0NTdjZDg3OWM3NWYwMzBiZmUwYTVjNDI1NGM5YjE5OGQ451+45A==: 00:18:27.982 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.982 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.982 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.982 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.982 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.983 11:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.983 11:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.553 00:18:28.553 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.553 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.553 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.813 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.813 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.813 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.813 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.814 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.814 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.814 { 00:18:28.814 "cntlid": 141, 00:18:28.814 "qid": 0, 00:18:28.814 "state": "enabled", 00:18:28.814 "thread": "nvmf_tgt_poll_group_000", 00:18:28.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.814 "listen_address": { 00:18:28.814 "trtype": "TCP", 00:18:28.814 "adrfam": "IPv4", 00:18:28.814 "traddr": "10.0.0.2", 00:18:28.814 "trsvcid": "4420" 00:18:28.814 }, 00:18:28.814 "peer_address": { 00:18:28.814 "trtype": "TCP", 00:18:28.814 "adrfam": "IPv4", 00:18:28.814 "traddr": "10.0.0.1", 00:18:28.814 "trsvcid": "49692" 00:18:28.814 }, 00:18:28.814 "auth": { 00:18:28.814 "state": "completed", 00:18:28.814 "digest": "sha512", 00:18:28.814 "dhgroup": "ffdhe8192" 00:18:28.814 } 00:18:28.814 } 00:18:28.814 ]' 00:18:28.814 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.814 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.814 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.814 11:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.814 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.074 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.074 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.074 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.074 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:29.074 11:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:01:YzQ2N2QwZTViMTJjYzU3MDIxMDIxYTU0MDgwMzlhNDHJOCgo: 00:18:29.645 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.645 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.645 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.645 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.645 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.645 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.645 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.645 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.906 11:52:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.906 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.542 00:18:30.542 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.542 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.542 11:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.542 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.542 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.857 { 00:18:30.857 "cntlid": 143, 00:18:30.857 "qid": 0, 00:18:30.857 "state": "enabled", 00:18:30.857 "thread": "nvmf_tgt_poll_group_000", 00:18:30.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.857 "listen_address": { 00:18:30.857 "trtype": "TCP", 00:18:30.857 "adrfam": "IPv4", 00:18:30.857 "traddr": "10.0.0.2", 00:18:30.857 "trsvcid": "4420" 00:18:30.857 }, 00:18:30.857 "peer_address": { 00:18:30.857 "trtype": "TCP", 00:18:30.857 "adrfam": "IPv4", 00:18:30.857 "traddr": "10.0.0.1", 00:18:30.857 "trsvcid": "49714" 00:18:30.857 }, 00:18:30.857 "auth": { 00:18:30.857 "state": "completed", 00:18:30.857 "digest": "sha512", 00:18:30.857 "dhgroup": "ffdhe8192" 00:18:30.857 } 00:18:30.857 } 00:18:30.857 ]' 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.857 
11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.857 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.144 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:31.144 11:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.725 11:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.725 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.296 00:18:32.296 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.296 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.296 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.556 { 00:18:32.556 "cntlid": 145, 00:18:32.556 "qid": 0, 00:18:32.556 "state": "enabled", 00:18:32.556 "thread": "nvmf_tgt_poll_group_000", 00:18:32.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.556 "listen_address": { 00:18:32.556 "trtype": "TCP", 00:18:32.556 "adrfam": "IPv4", 00:18:32.556 "traddr": "10.0.0.2", 00:18:32.556 "trsvcid": "4420" 00:18:32.556 }, 00:18:32.556 "peer_address": { 00:18:32.556 
"trtype": "TCP", 00:18:32.556 "adrfam": "IPv4", 00:18:32.556 "traddr": "10.0.0.1", 00:18:32.556 "trsvcid": "59242" 00:18:32.556 }, 00:18:32.556 "auth": { 00:18:32.556 "state": "completed", 00:18:32.556 "digest": "sha512", 00:18:32.556 "dhgroup": "ffdhe8192" 00:18:32.556 } 00:18:32.556 } 00:18:32.556 ]' 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.556 11:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.556 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.556 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.556 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.556 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.556 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.815 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:32.815 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:M2RhNWM3M2I4MTU5ZThkZjc1MTNmNDIyODc1MTdiMjlhNTAxMjQ4NzU1OGJmMGExjGAmHQ==: --dhchap-ctrl-secret DHHC-1:03:MjEwMDMyNTI2MTlmMGIyNjE5ZWU1NTY3Y2VhYjkzMTQzYzQ1NjRkZmI3ZjIxYzc2YjY2YmI5NmNiODE0MWNhYic7H1c=: 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:33.385 11:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:33.956 request: 00:18:33.956 { 00:18:33.956 "name": "nvme0", 00:18:33.956 "trtype": "tcp", 00:18:33.956 "traddr": "10.0.0.2", 00:18:33.956 "adrfam": "ipv4", 00:18:33.956 "trsvcid": "4420", 00:18:33.956 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.956 "prchk_reftag": false, 00:18:33.956 "prchk_guard": false, 00:18:33.956 "hdgst": false, 00:18:33.956 "ddgst": false, 00:18:33.956 "dhchap_key": "key2", 00:18:33.956 "allow_unrecognized_csi": false, 00:18:33.956 "method": "bdev_nvme_attach_controller", 00:18:33.956 "req_id": 1 00:18:33.956 } 00:18:33.956 Got JSON-RPC error response 00:18:33.956 response: 00:18:33.956 { 00:18:33.956 "code": -5, 00:18:33.956 "message": "Input/output error" 00:18:33.956 } 00:18:33.956 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:33.956 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.956 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.956 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.956 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.956 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.956 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.956 11:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.956 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:33.957 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:34.218 request: 00:18:34.218 { 00:18:34.218 "name": "nvme0", 00:18:34.218 "trtype": "tcp", 00:18:34.218 "traddr": "10.0.0.2", 00:18:34.218 "adrfam": "ipv4", 00:18:34.218 "trsvcid": "4420", 00:18:34.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:34.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.218 "prchk_reftag": false, 00:18:34.218 "prchk_guard": false, 00:18:34.218 "hdgst": false, 00:18:34.218 "ddgst": false, 00:18:34.218 "dhchap_key": "key1", 00:18:34.218 "dhchap_ctrlr_key": "ckey2", 00:18:34.218 "allow_unrecognized_csi": false, 00:18:34.218 "method": "bdev_nvme_attach_controller", 00:18:34.218 "req_id": 1 00:18:34.218 } 00:18:34.218 Got JSON-RPC error response 00:18:34.218 response: 00:18:34.218 { 00:18:34.218 "code": -5, 00:18:34.218 "message": "Input/output error" 00:18:34.218 } 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:34.218 11:52:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.218 11:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.789 request: 00:18:34.789 { 00:18:34.789 "name": "nvme0", 00:18:34.789 "trtype": "tcp", 00:18:34.789 "traddr": "10.0.0.2", 00:18:34.789 "adrfam": "ipv4", 00:18:34.789 "trsvcid": "4420", 00:18:34.789 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:34.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.789 "prchk_reftag": false, 00:18:34.789 "prchk_guard": false, 00:18:34.789 "hdgst": false, 00:18:34.789 "ddgst": false, 00:18:34.789 "dhchap_key": "key1", 00:18:34.789 "dhchap_ctrlr_key": "ckey1", 00:18:34.789 "allow_unrecognized_csi": false, 00:18:34.789 "method": "bdev_nvme_attach_controller", 00:18:34.789 "req_id": 1 00:18:34.789 } 00:18:34.789 Got JSON-RPC error response 00:18:34.789 response: 00:18:34.789 { 00:18:34.789 "code": -5, 00:18:34.789 "message": "Input/output error" 00:18:34.789 } 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 991946 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 991946 ']' 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 991946 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 991946 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 991946' 00:18:34.789 killing process with pid 991946 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 991946 00:18:34.789 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 991946 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1017597 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1017597 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1017597 ']' 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.050 11:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1017597 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1017597 ']' 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
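The target has just been restarted with DHCHAP auth logging enabled, and the records that follow load the generated secrets into the SPDK keyring instead of passing them inline. Condensed, the sequence being exercised looks roughly like this (a sketch only, not the literal script: paths are shortened from the absolute Jenkins workspace paths in the records above, the netns name and /tmp/spdk.key-* files are the ones printed in this log, and rpc.py talks to the default /var/tmp/spdk.sock target socket):

    # restart nvmf_tgt inside the test netns with nvmf_auth debug logging
    # (the harness backgrounds this and waits for /var/tmp/spdk.sock to appear)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

    # once the RPC socket is up, register each DHCHAP secret as a named keyring file
    ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.MFd     # host key 0
    ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wlI   # controller key 0
    ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.i8I
    ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPb
    ./scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.Nkg
    ./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rd4
    ./scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.jhn   # key3 has no controller key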
00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.992 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.992 null0 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.MFd 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.wlI ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wlI 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.i8I 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.sPb ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sPb 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:36.254 11:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Nkg 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.rd4 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rd4 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jhn 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.254 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:36.255 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.255 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.255 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.255 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.255 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
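From here the authenticated attach follows the same two-step pattern as the earlier iterations, except that --dhchap-key now names a keyring entry rather than an inline key. Condensed (a sketch under the same assumptions as above: rpc.py paths shortened, NQNs, address, and sockets exactly as used throughout this run):

    # target side (default /var/tmp/spdk.sock): allow the host and bind keyring entry key3
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3

    # host side (-s /var/tmp/host.sock): attach, authenticating with that same keyring key
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

A successful attach surfaces the namespace as nvme0n1 in the next records; the later NOT bdev_connect records then deliberately re-run the attach after restricting the host to sha256 digests, expecting the -5 Input/output error shown below.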
00:18:36.255 11:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.825 nvme0n1 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.085 { 00:18:37.085 "cntlid": 1, 00:18:37.085 "qid": 0, 00:18:37.085 "state": "enabled", 00:18:37.085 "thread": "nvmf_tgt_poll_group_000", 00:18:37.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.085 "listen_address": { 00:18:37.085 "trtype": "TCP", 00:18:37.085 "adrfam": "IPv4", 00:18:37.085 "traddr": "10.0.0.2", 00:18:37.085 "trsvcid": "4420" 00:18:37.085 }, 00:18:37.085 "peer_address": { 00:18:37.085 "trtype": "TCP", 00:18:37.085 "adrfam": "IPv4", 00:18:37.085 "traddr": "10.0.0.1", 00:18:37.085 "trsvcid": "59282" 00:18:37.085 }, 00:18:37.085 "auth": { 00:18:37.085 "state": "completed", 00:18:37.085 "digest": "sha512", 00:18:37.085 "dhgroup": "ffdhe8192" 00:18:37.085 } 00:18:37.085 } 00:18:37.085 ]' 00:18:37.085 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.345 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.345 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.345 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.345 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.345 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.345 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.345 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.605 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:37.605 11:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:38.175 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.175 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.175 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.175 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.175 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.176 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:38.176 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.176 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.176 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.176 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:38.176 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:38.437 11:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:38.437 request:
00:18:38.437 {
00:18:38.437 "name": "nvme0",
00:18:38.437 "trtype": "tcp",
00:18:38.437 "traddr": "10.0.0.2",
00:18:38.437 "adrfam": "ipv4",
00:18:38.437 "trsvcid": "4420",
00:18:38.437 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:38.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:38.437 "prchk_reftag": false,
00:18:38.437 "prchk_guard": false,
00:18:38.437 "hdgst": false,
00:18:38.437 "ddgst": false,
00:18:38.437 "dhchap_key": "key3",
00:18:38.437 "allow_unrecognized_csi": false,
00:18:38.437 "method": "bdev_nvme_attach_controller",
00:18:38.437 "req_id": 1
00:18:38.437 }
00:18:38.437 Got JSON-RPC error response
00:18:38.437 response:
00:18:38.437 {
00:18:38.437 "code": -5,
00:18:38.437 "message": "Input/output error"
00:18:38.437 }
00:18:38.437 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:38.437 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:38.437 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:38.437 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:38.437 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:18:38.437 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:18:38.437 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:38.437 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.697 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.957 request: 00:18:38.957 { 00:18:38.957 "name": "nvme0", 00:18:38.957 "trtype": "tcp", 00:18:38.957 "traddr": "10.0.0.2", 00:18:38.957 "adrfam": "ipv4", 00:18:38.957 "trsvcid": "4420", 00:18:38.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.957 "prchk_reftag": false, 00:18:38.957 "prchk_guard": false, 00:18:38.957 "hdgst": false, 00:18:38.957 "ddgst": false, 00:18:38.957 "dhchap_key": "key3", 00:18:38.957 "allow_unrecognized_csi": false, 00:18:38.957 "method": "bdev_nvme_attach_controller", 00:18:38.957 "req_id": 1 00:18:38.957 } 00:18:38.957 Got JSON-RPC error response 00:18:38.957 response: 00:18:38.957 { 00:18:38.957 "code": -5, 00:18:38.957 "message": "Input/output error" 00:18:38.957 } 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.957 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:39.217 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:39.477 request: 00:18:39.477 { 00:18:39.477 "name": "nvme0", 00:18:39.477 "trtype": "tcp", 00:18:39.477 "traddr": "10.0.0.2", 00:18:39.477 "adrfam": "ipv4", 00:18:39.477 "trsvcid": "4420", 00:18:39.477 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.477 "prchk_reftag": false, 00:18:39.477 "prchk_guard": false, 00:18:39.477 "hdgst": false, 00:18:39.477 "ddgst": false, 00:18:39.477 "dhchap_key": "key0", 00:18:39.477 "dhchap_ctrlr_key": "key1", 00:18:39.477 "allow_unrecognized_csi": false, 00:18:39.477 "method": "bdev_nvme_attach_controller", 00:18:39.477 "req_id": 1 00:18:39.477 } 00:18:39.477 Got JSON-RPC error response 00:18:39.477 response: 00:18:39.477 { 00:18:39.477 "code": -5, 00:18:39.477 "message": "Input/output error" 00:18:39.477 } 00:18:39.477 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:39.477 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.477 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.477 11:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.477 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:39.477 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:39.477 11:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:39.737 nvme0n1 00:18:39.737 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:39.737 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:39.737 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:39.997 11:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:40.937 nvme0n1 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:40.937 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.198 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.198 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:41.198 11:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: --dhchap-ctrl-secret DHHC-1:03:YTRkZGJiZjMzMzY1NjM2Njk3NmM0ZDQzZTlhMmJhMTAxZDkyMGVjYWJhZGE1MTZhYTQzYTVhNzIzMjAxNWQ4MdJyGow=: 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.767 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:42.027 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:42.288 request: 00:18:42.288 { 00:18:42.288 "name": "nvme0", 00:18:42.288 "trtype": "tcp", 00:18:42.288 "traddr": "10.0.0.2", 00:18:42.288 "adrfam": "ipv4", 00:18:42.288 "trsvcid": "4420", 00:18:42.288 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.288 "prchk_reftag": false, 00:18:42.288 "prchk_guard": false, 00:18:42.288 "hdgst": false, 00:18:42.288 "ddgst": false, 00:18:42.288 "dhchap_key": "key1", 00:18:42.288 "allow_unrecognized_csi": false, 00:18:42.288 "method": "bdev_nvme_attach_controller", 00:18:42.288 "req_id": 1 00:18:42.288 } 00:18:42.288 Got JSON-RPC error response 00:18:42.288 response: 00:18:42.288 { 00:18:42.288 "code": -5, 00:18:42.288 "message": "Input/output error" 00:18:42.288 } 00:18:42.288 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:42.288 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:42.288 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:42.288 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:42.288 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.288 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.288 11:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:43.229 nvme0n1 00:18:43.229 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:43.229 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:43.229 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.229 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.229 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.229 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.490 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.490 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.490 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.490 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.490 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:43.490 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:43.490 11:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:43.750 nvme0n1 00:18:43.750 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:43.750 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:43.750 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: '' 2s 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: ]] 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDRhMDgzNDA5NDg1ZmU5ODg3NzUxM2FmMjk0N2Q3NTd21sn7: 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:44.010 11:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: 2s 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: ]] 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzEyZTgzZDFjNTdjNjZhMmZhMzI1YzllZjRlYjY2MTIyYzdlZDUzOGY4YWJhN2Q4kxpZew==: 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:46.553 11:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:48.465 11:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:49.035 nvme0n1 00:18:49.035 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:49.035 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.035 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.035 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.035 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:49.035 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:49.605 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:49.605 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:49.605 11:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.605 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.605 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.605 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.605 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.605 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.605 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:49.605 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:49.866 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:50.436 request: 00:18:50.436 { 00:18:50.436 "name": "nvme0", 00:18:50.436 "dhchap_key": "key1", 00:18:50.436 "dhchap_ctrlr_key": "key3", 00:18:50.436 "method": "bdev_nvme_set_keys", 00:18:50.436 "req_id": 1 00:18:50.436 } 00:18:50.436 Got JSON-RPC error response 00:18:50.436 response: 00:18:50.436 { 00:18:50.436 "code": -13, 00:18:50.436 "message": "Permission denied" 00:18:50.436 } 00:18:50.436 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:50.436 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:50.436 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:50.436 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:50.436 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:50.436 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:50.436 11:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.696 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:50.696 11:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:51.635 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:51.635 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:51.635 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.895 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:51.895 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:51.895 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.895 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.895 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.895 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.895 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.895 11:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:52.464 nvme0n1 00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
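The NOT/valid_exec_arg preamble above is the harness arranging an expected failure: the target was just re-keyed to the key2/key3 pair via nvmf_subsystem_set_keys, so a host-side bdev_nvme_set_keys that still offers key0 as the controller key must be rejected, and the trace that follows ends in JSON-RPC error -13 (Permission denied). A standalone sketch of the same negative check, reusing this log's socket path and key names, assuming rpc.py exits non-zero on a JSON-RPC error (which is the exit status the NOT helper keys off):

    # Expected to fail: the subsystem now requires key3 as its controller
    # key, so offering the retired key0 should yield -13 Permission denied.
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
           --dhchap-key key2 --dhchap-ctrlr-key key0; then
        echo "unexpected success: rotation with a retired key was accepted" >&2
        exit 1
    fi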
00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:52.464 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:53.036 request:
00:18:53.036 {
00:18:53.036 "name": "nvme0",
00:18:53.036 "dhchap_key": "key2",
00:18:53.036 "dhchap_ctrlr_key": "key0",
00:18:53.036 "method": "bdev_nvme_set_keys",
00:18:53.036 "req_id": 1
00:18:53.036 }
00:18:53.036 Got JSON-RPC error response
00:18:53.036 response:
00:18:53.036 {
00:18:53.036 "code": -13,
00:18:53.036 "message": "Permission denied"
00:18:53.036 }
00:18:53.036 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:53.036 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:53.036 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:53.036 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:53.036 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:18:53.036 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:18:53.036 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:53.297 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:18:53.297 11:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 992293
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 992293 ']'
00:18:54.237 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 992293
00:18:54.497 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname
00:18:54.497 11:52:38
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.497 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 992293 00:18:54.497 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:54.497 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:54.497 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 992293' 00:18:54.497 killing process with pid 992293 00:18:54.497 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 992293 00:18:54.497 11:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 992293 00:18:54.497 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:54.497 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:54.497 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:54.497 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:54.497 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:54.497 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.497 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:54.758 rmmod nvme_tcp 00:18:54.758 rmmod nvme_fabrics 00:18:54.758 rmmod nvme_keyring 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1017597 ']' 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1017597 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1017597 ']' 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1017597 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1017597 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1017597' 00:18:54.758 killing process with pid 1017597 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1017597 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 1017597 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.758 11:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.MFd /tmp/spdk.key-sha256.i8I /tmp/spdk.key-sha384.Nkg /tmp/spdk.key-sha512.jhn /tmp/spdk.key-sha512.wlI /tmp/spdk.key-sha384.sPb /tmp/spdk.key-sha256.rd4 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:57.302 00:18:57.302 real 2m32.263s 00:18:57.302 user 5m43.624s 00:18:57.302 sys 0m21.674s 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.302 ************************************ 00:18:57.302 END TEST nvmf_auth_target 00:18:57.302 ************************************ 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.302 ************************************ 00:18:57.302 START TEST nvmf_bdevio_no_huge 00:18:57.302 ************************************ 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:57.302 * Looking for test storage... 
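Before the next test starts probing for storage, note how the nvmf_auth_target teardown above restored the firewall: the three nvmf/common.sh@789 entries (iptables-save, grep -v SPDK_NVMF, iptables-restore) read as a single pipeline that replays the saved ruleset minus SPDK's own rules. A sketch of that idiom, assuming the three traced commands do form one pipeline:

    # Remove only SPDK's firewall entries: dump the live ruleset, drop the
    # SPDK_NVMF-tagged lines, and load what remains back unchanged.
    iptables-save | grep -v SPDK_NVMF | iptables-restore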
00:18:57.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.302 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:57.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.303 --rc genhtml_branch_coverage=1 00:18:57.303 --rc genhtml_function_coverage=1 00:18:57.303 --rc genhtml_legend=1 00:18:57.303 --rc geninfo_all_blocks=1 00:18:57.303 --rc geninfo_unexecuted_blocks=1 00:18:57.303 00:18:57.303 ' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:57.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.303 --rc genhtml_branch_coverage=1 00:18:57.303 --rc genhtml_function_coverage=1 00:18:57.303 --rc genhtml_legend=1 00:18:57.303 --rc geninfo_all_blocks=1 00:18:57.303 --rc geninfo_unexecuted_blocks=1 00:18:57.303 00:18:57.303 ' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:57.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.303 --rc genhtml_branch_coverage=1 00:18:57.303 --rc genhtml_function_coverage=1 00:18:57.303 --rc genhtml_legend=1 00:18:57.303 --rc geninfo_all_blocks=1 00:18:57.303 --rc geninfo_unexecuted_blocks=1 00:18:57.303 00:18:57.303 ' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:57.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.303 --rc genhtml_branch_coverage=1 00:18:57.303 --rc genhtml_function_coverage=1 00:18:57.303 --rc genhtml_legend=1 00:18:57.303 --rc geninfo_all_blocks=1 00:18:57.303 --rc geninfo_unexecuted_blocks=1 00:18:57.303 00:18:57.303 ' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:57.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:57.303 11:52:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:05.452 
11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:05.452 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.452 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:05.452 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:05.453 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:05.453 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.453 11:52:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:05.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:19:05.453 00:19:05.453 --- 10.0.0.2 ping statistics --- 00:19:05.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.453 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:19:05.453 00:19:05.453 --- 10.0.0.1 ping statistics --- 00:19:05.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.453 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1025756 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1025756 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1025756 ']' 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.453 11:52:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.453 [2024-10-11 11:52:49.280293] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:05.453 [2024-10-11 11:52:49.280358] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:05.453 [2024-10-11 11:52:49.375851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:05.453 [2024-10-11 11:52:49.436478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.453 [2024-10-11 11:52:49.436526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.453 [2024-10-11 11:52:49.436535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.453 [2024-10-11 11:52:49.436542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.453 [2024-10-11 11:52:49.436549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
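The nvmfappstart step above boils down to launching nvmf_tgt inside the target network namespace without hugepages and then waiting for its RPC socket. A minimal bash sketch of that pattern, assuming the namespace, binary path, and flags shown in the trace; the polling loop is a simplified stand-in for the harness's waitforlisten, and /var/tmp/spdk.sock is assumed to be the default RPC socket:

    # Sketch: replay of the traced no-hugepage target launch (not the harness itself).
    NS=cvl_0_0_ns_spdk
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

    ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # Poll until the RPC socket appears; bail out if the target exits first.
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done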
00:19:05.453 [2024-10-11 11:52:49.438043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:05.453 [2024-10-11 11:52:49.438204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:05.453 [2024-10-11 11:52:49.438402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.453 [2024-10-11 11:52:49.438403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.714 [2024-10-11 11:52:50.152903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.714 Malloc0 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:05.714 [2024-10-11 11:52:50.207570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:05.714 { 00:19:05.714 "params": { 00:19:05.714 "name": "Nvme$subsystem", 00:19:05.714 "trtype": "$TEST_TRANSPORT", 00:19:05.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.714 "adrfam": "ipv4", 00:19:05.714 "trsvcid": "$NVMF_PORT", 00:19:05.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.714 "hdgst": ${hdgst:-false}, 00:19:05.714 "ddgst": ${ddgst:-false} 00:19:05.714 }, 00:19:05.714 "method": "bdev_nvme_attach_controller" 00:19:05.714 } 00:19:05.714 EOF 00:19:05.714 )") 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:19:05.714 11:52:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:05.714 "params": { 00:19:05.714 "name": "Nvme1", 00:19:05.714 "trtype": "tcp", 00:19:05.714 "traddr": "10.0.0.2", 00:19:05.714 "adrfam": "ipv4", 00:19:05.714 "trsvcid": "4420", 00:19:05.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.714 "hdgst": false, 00:19:05.714 "ddgst": false 00:19:05.714 }, 00:19:05.714 "method": "bdev_nvme_attach_controller" 00:19:05.714 }' 00:19:05.714 [2024-10-11 11:52:50.264935] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:05.714 [2024-10-11 11:52:50.265006] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1025960 ] 00:19:05.975 [2024-10-11 11:52:50.350471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:05.975 [2024-10-11 11:52:50.411645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.975 [2024-10-11 11:52:50.411808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.975 [2024-10-11 11:52:50.411985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.975 I/O targets: 00:19:05.975 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:05.975 00:19:05.975 00:19:05.975 CUnit - A unit testing framework for C - Version 2.1-3 00:19:05.975 http://cunit.sourceforge.net/ 00:19:05.975 00:19:05.975 00:19:05.975 Suite: bdevio tests on: Nvme1n1 00:19:06.236 Test: blockdev write read block ...passed 00:19:06.236 Test: blockdev write zeroes read block ...passed 00:19:06.236 Test: blockdev write zeroes read no split ...passed 00:19:06.236 Test: blockdev write zeroes read split ...passed 00:19:06.236 Test: blockdev write zeroes read split partial ...passed 00:19:06.236 Test: blockdev reset ...[2024-10-11 11:52:50.775148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:06.236 [2024-10-11 11:52:50.775250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x736d00 (9): Bad file descriptor 00:19:06.236 [2024-10-11 11:52:50.786815] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:06.236 passed 00:19:06.236 Test: blockdev write read 8 blocks ...passed 00:19:06.236 Test: blockdev write read size > 128k ...passed 00:19:06.236 Test: blockdev write read invalid size ...passed 00:19:06.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:06.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:06.497 Test: blockdev write read max offset ...passed 00:19:06.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:06.497 Test: blockdev writev readv 8 blocks ...passed 00:19:06.497 Test: blockdev writev readv 30 x 1block ...passed 00:19:06.497 Test: blockdev writev readv block ...passed 00:19:06.497 Test: blockdev writev readv size > 128k ...passed 00:19:06.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:06.497 Test: blockdev comparev and writev ...[2024-10-11 11:52:51.046446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:06.497 [2024-10-11 11:52:51.046496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:06.497 [2024-10-11 11:52:51.046512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:06.497 [2024-10-11 11:52:51.046521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.497 [2024-10-11 11:52:51.046952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:06.497 [2024-10-11 11:52:51.046965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:06.497 [2024-10-11 11:52:51.046979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:06.497 [2024-10-11 11:52:51.046988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:06.497 [2024-10-11 11:52:51.047418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:06.497 [2024-10-11 11:52:51.047430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:06.497 [2024-10-11 11:52:51.047444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:06.497 [2024-10-11 11:52:51.047452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:06.497 [2024-10-11 11:52:51.047887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:06.497 [2024-10-11 11:52:51.047899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:06.497 [2024-10-11 11:52:51.047913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:06.497 [2024-10-11 11:52:51.047920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:06.497 passed 00:19:06.758 Test: blockdev nvme passthru rw ...passed 00:19:06.758 Test: blockdev nvme passthru vendor specific ...[2024-10-11 11:52:51.131212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:06.758 [2024-10-11 11:52:51.131227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:06.758 [2024-10-11 11:52:51.131469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:06.758 [2024-10-11 11:52:51.131479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.758 [2024-10-11 11:52:51.131747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:06.758 [2024-10-11 11:52:51.131758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.758 [2024-10-11 11:52:51.132006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:06.758 [2024-10-11 11:52:51.132017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.758 passed 00:19:06.758 Test: blockdev nvme admin passthru ...passed 00:19:06.758 Test: blockdev copy ...passed 00:19:06.758 00:19:06.758 Run Summary: Type Total Ran Passed Failed Inactive 00:19:06.758 suites 1 1 n/a 0 0 00:19:06.758 tests 23 23 23 0 0 00:19:06.758 asserts 152 152 152 0 n/a 00:19:06.758 00:19:06.758 Elapsed time = 1.188 seconds 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.018 rmmod nvme_tcp 00:19:07.018 rmmod nvme_fabrics 00:19:07.018 rmmod nvme_keyring 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1025756 ']' 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1025756 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1025756 ']' 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1025756 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1025756 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1025756' 00:19:07.018 killing process with pid 1025756 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1025756 00:19:07.018 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1025756 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.278 11:52:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.824 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:09.824 00:19:09.824 real 0m12.345s 00:19:09.824 user 0m13.499s 00:19:09.824 sys 0m6.647s 00:19:09.824 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:09.824 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.824 ************************************ 00:19:09.824 END TEST nvmf_bdevio_no_huge 00:19:09.824 ************************************ 00:19:09.824 11:52:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:09.824 11:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:09.824 11:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:09.824 11:52:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:09.824 ************************************ 00:19:09.824 START TEST nvmf_tls 00:19:09.824 ************************************ 00:19:09.824 11:52:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:09.824 * Looking for test storage... 00:19:09.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.824 --rc genhtml_branch_coverage=1 00:19:09.824 --rc genhtml_function_coverage=1 00:19:09.824 --rc genhtml_legend=1 00:19:09.824 --rc geninfo_all_blocks=1 00:19:09.824 --rc geninfo_unexecuted_blocks=1 00:19:09.824 00:19:09.824 ' 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.824 --rc genhtml_branch_coverage=1 00:19:09.824 --rc genhtml_function_coverage=1 00:19:09.824 --rc genhtml_legend=1 00:19:09.824 --rc geninfo_all_blocks=1 00:19:09.824 --rc geninfo_unexecuted_blocks=1 00:19:09.824 00:19:09.824 ' 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.824 --rc genhtml_branch_coverage=1 00:19:09.824 --rc genhtml_function_coverage=1 00:19:09.824 --rc genhtml_legend=1 00:19:09.824 --rc geninfo_all_blocks=1 00:19:09.824 --rc geninfo_unexecuted_blocks=1 00:19:09.824 00:19:09.824 ' 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:09.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.824 --rc genhtml_branch_coverage=1 00:19:09.824 --rc genhtml_function_coverage=1 00:19:09.824 --rc genhtml_legend=1 00:19:09.824 --rc geninfo_all_blocks=1 00:19:09.824 --rc geninfo_unexecuted_blocks=1 00:19:09.824 00:19:09.824 ' 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
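The version check traced above (lt 1.15 2 via cmp_versions in scripts/common.sh) splits both version strings on dots, dashes, and colons, then compares them component by component. A condensed sketch of that logic, assuming numeric components; the traced decimal helper that validates each field is folded into a default of 0 here:

    # Condensed from the traced comparison: "lt A B" succeeds when A < B.
    cmp_versions() {
        local op=$2 IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        return 1   # equal: neither strictly less nor greater
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> exit status 0 (true)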
00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.824 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:09.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:09.825 11:52:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:17.967 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:17.967 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:17.967 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:17.967 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:17.967 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:17.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:19:17.968 00:19:17.968 --- 10.0.0.2 ping statistics --- 00:19:17.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.968 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:17.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:19:17.968 00:19:17.968 --- 10.0.0.1 ping statistics --- 00:19:17.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.968 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1030457 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1030457 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1030457 ']' 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:17.968 11:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.968 [2024-10-11 11:53:01.758721] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:17.968 [2024-10-11 11:53:01.758785] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.968 [2024-10-11 11:53:01.848966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.968 [2024-10-11 11:53:01.899287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.968 [2024-10-11 11:53:01.899340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.968 [2024-10-11 11:53:01.899349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.968 [2024-10-11 11:53:01.899356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.968 [2024-10-11 11:53:01.899362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.968 [2024-10-11 11:53:01.900170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.968 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.968 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:17.968 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:17.968 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:17.968 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.229 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.229 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:18.229 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:18.229 true 00:19:18.229 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:18.229 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:18.490 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:18.490 11:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:18.490 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:18.750 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:18.750 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:18.750 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:18.750 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:18.750 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:19.010 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # jq -r .tls_version 00:19:19.010 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:19.271 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:19.271 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:19.271 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:19.271 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:19.533 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:19.533 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:19.533 11:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:19.533 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:19.533 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:19.794 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:19.794 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:19.794 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:20.056 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Q2VgrEBoxQ 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.W2DoCNNWN5 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Q2VgrEBoxQ 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.W2DoCNNWN5 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:20.316 11:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:20.576 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Q2VgrEBoxQ 00:19:20.576 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Q2VgrEBoxQ 00:19:20.576 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.836 [2024-10-11 11:53:05.329796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.836 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:21.096 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:21.096 [2024-10-11 11:53:05.666624] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.096 [2024-10-11 11:53:05.666843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.096 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.357 malloc0 00:19:21.357 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.617 11:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Q2VgrEBoxQ 00:19:21.617 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:21.878 11:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Q2VgrEBoxQ 00:19:31.874 Initializing NVMe Controllers 00:19:31.874 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:31.874 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:31.874 Initialization complete. Launching workers. 00:19:31.874 ======================================================== 00:19:31.874 Latency(us) 00:19:31.874 Device Information : IOPS MiB/s Average min max 00:19:31.874 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18640.47 72.81 3433.61 1224.44 4280.05 00:19:31.874 ======================================================== 00:19:31.874 Total : 18640.47 72.81 3433.61 1224.44 4280.05 00:19:31.874 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q2VgrEBoxQ 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q2VgrEBoxQ 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1033243 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1033243 /var/tmp/bdevperf.sock 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1033243 ']' 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:31.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.874 11:53:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.874 [2024-10-11 11:53:16.486600] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:31.874 [2024-10-11 11:53:16.486658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033243 ] 00:19:32.135 [2024-10-11 11:53:16.561909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.135 [2024-10-11 11:53:16.597362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.705 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.705 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.705 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q2VgrEBoxQ 00:19:32.965 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.225 [2024-10-11 11:53:17.608554] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.225 TLSTESTn1 00:19:33.225 11:53:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:33.225 Running I/O for 10 seconds... 
00:19:35.177 5867.00 IOPS, 22.92 MiB/s [2024-10-11T09:53:21.192Z] 5521.00 IOPS, 21.57 MiB/s [2024-10-11T09:53:22.134Z] 5532.33 IOPS, 21.61 MiB/s [2024-10-11T09:53:23.076Z] 5687.50 IOPS, 22.22 MiB/s [2024-10-11T09:53:24.018Z] 5808.80 IOPS, 22.69 MiB/s [2024-10-11T09:53:24.959Z] 5890.33 IOPS, 23.01 MiB/s [2024-10-11T09:53:25.902Z] 5994.14 IOPS, 23.41 MiB/s [2024-10-11T09:53:26.841Z] 5995.00 IOPS, 23.42 MiB/s [2024-10-11T09:53:28.226Z] 6013.33 IOPS, 23.49 MiB/s [2024-10-11T09:53:28.226Z] 6076.00 IOPS, 23.73 MiB/s 00:19:43.594 Latency(us) 00:19:43.594 [2024-10-11T09:53:28.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.594 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:43.594 Verification LBA range: start 0x0 length 0x2000 00:19:43.594 TLSTESTn1 : 10.02 6079.02 23.75 0.00 0.00 21021.89 4614.83 43035.31 00:19:43.594 [2024-10-11T09:53:28.226Z] =================================================================================================================== 00:19:43.594 [2024-10-11T09:53:28.226Z] Total : 6079.02 23.75 0.00 0.00 21021.89 4614.83 43035.31 00:19:43.594 { 00:19:43.594 "results": [ 00:19:43.594 { 00:19:43.594 "job": "TLSTESTn1", 00:19:43.594 "core_mask": "0x4", 00:19:43.594 "workload": "verify", 00:19:43.594 "status": "finished", 00:19:43.594 "verify_range": { 00:19:43.594 "start": 0, 00:19:43.594 "length": 8192 00:19:43.594 }, 00:19:43.594 "queue_depth": 128, 00:19:43.594 "io_size": 4096, 00:19:43.594 "runtime": 10.015929, 00:19:43.594 "iops": 6079.016734244023, 00:19:43.594 "mibps": 23.746159118140714, 00:19:43.594 "io_failed": 0, 00:19:43.594 "io_timeout": 0, 00:19:43.594 "avg_latency_us": 21021.892944854128, 00:19:43.594 "min_latency_us": 4614.826666666667, 00:19:43.594 "max_latency_us": 43035.306666666664 00:19:43.594 } 00:19:43.594 ], 00:19:43.594 "core_count": 1 00:19:43.594 } 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1033243 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1033243 ']' 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1033243 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1033243 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1033243' 00:19:43.594 killing process with pid 1033243 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1033243 00:19:43.594 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.594 00:19:43.594 Latency(us) 00:19:43.594 [2024-10-11T09:53:28.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.594 [2024-10-11T09:53:28.226Z] 
=================================================================================================================== 00:19:43.594 [2024-10-11T09:53:28.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:43.594 11:53:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1033243 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W2DoCNNWN5 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W2DoCNNWN5 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.W2DoCNNWN5 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.W2DoCNNWN5 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1035534 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1035534 /var/tmp/bdevperf.sock 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1035534 ']' 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
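What starts here is the first of three expected-failure cases: the same bdevperf attach is repeated, but the initiator is handed the second key (/tmp/tmp.W2DoCNNWN5) while the target only registered key0 from /tmp/tmp.Q2VgrEBoxQ for this host, so the TLS handshake cannot find a matching PSK and bdev_nvme_attach_controller has to error out. The NOT/valid_exec_arg calls in the trace are autotest_common.sh's inversion wrapper, which turns an expected failure into a passing check. A simplified sketch of the pattern (the real NOT also validates that its argument is runnable via type -t, as traced above):

NOT() {
    # Pass only when the wrapped command fails: an expected failure
    # is a test success, an unexpected success is a test failure.
    if "$@"; then
        return 1
    fi
    return 0
}
# Expected to fail: the PSK loaded into the initiator keyring does not
# match the key the target associated with this host NQN.
NOT rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0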
00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.594 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.594 [2024-10-11 11:53:28.079298] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:43.594 [2024-10-11 11:53:28.079353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035534 ] 00:19:43.594 [2024-10-11 11:53:28.155768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.595 [2024-10-11 11:53:28.183629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.855 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.855 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.855 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.W2DoCNNWN5 00:19:43.855 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:44.115 [2024-10-11 11:53:28.592610] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.115 [2024-10-11 11:53:28.601342] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:44.115 [2024-10-11 11:53:28.601809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e40b0 (107): Transport endpoint is not connected 00:19:44.115 [2024-10-11 11:53:28.602804] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e40b0 (9): Bad file descriptor 00:19:44.115 [2024-10-11 11:53:28.603806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:44.115 [2024-10-11 11:53:28.603813] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:44.115 [2024-10-11 11:53:28.603819] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:44.115 [2024-10-11 11:53:28.603827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:44.115 request: 00:19:44.115 { 00:19:44.115 "name": "TLSTEST", 00:19:44.115 "trtype": "tcp", 00:19:44.115 "traddr": "10.0.0.2", 00:19:44.115 "adrfam": "ipv4", 00:19:44.115 "trsvcid": "4420", 00:19:44.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.115 "prchk_reftag": false, 00:19:44.116 "prchk_guard": false, 00:19:44.116 "hdgst": false, 00:19:44.116 "ddgst": false, 00:19:44.116 "psk": "key0", 00:19:44.116 "allow_unrecognized_csi": false, 00:19:44.116 "method": "bdev_nvme_attach_controller", 00:19:44.116 "req_id": 1 00:19:44.116 } 00:19:44.116 Got JSON-RPC error response 00:19:44.116 response: 00:19:44.116 { 00:19:44.116 "code": -5, 00:19:44.116 "message": "Input/output error" 00:19:44.116 } 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1035534 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1035534 ']' 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1035534 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035534 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035534' 00:19:44.116 killing process with pid 1035534 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1035534 00:19:44.116 Received shutdown signal, test time was about 10.000000 seconds 00:19:44.116 00:19:44.116 Latency(us) 00:19:44.116 [2024-10-11T09:53:28.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.116 [2024-10-11T09:53:28.748Z] =================================================================================================================== 00:19:44.116 [2024-10-11T09:53:28.748Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:44.116 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1035534 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Q2VgrEBoxQ 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.Q2VgrEBoxQ 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Q2VgrEBoxQ 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q2VgrEBoxQ 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1035724 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1035724 /var/tmp/bdevperf.sock 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1035724 ']' 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.377 [2024-10-11 11:53:28.816103] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:44.377 [2024-10-11 11:53:28.816146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035724 ] 00:19:44.377 [2024-10-11 11:53:28.884944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.377 [2024-10-11 11:53:28.913420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:44.377 11:53:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q2VgrEBoxQ 00:19:44.637 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:44.899 [2024-10-11 11:53:29.314579] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.899 [2024-10-11 11:53:29.323006] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:44.899 [2024-10-11 11:53:29.323027] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:44.899 [2024-10-11 11:53:29.323045] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:44.899 [2024-10-11 11:53:29.323755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff0b0 (107): Transport endpoint is not connected 00:19:44.899 [2024-10-11 11:53:29.324751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ff0b0 (9): Bad file descriptor 00:19:44.899 [2024-10-11 11:53:29.325752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:44.899 [2024-10-11 11:53:29.325759] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:44.899 [2024-10-11 11:53:29.325764] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:44.899 [2024-10-11 11:53:29.325772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:44.899 request: 00:19:44.899 { 00:19:44.899 "name": "TLSTEST", 00:19:44.899 "trtype": "tcp", 00:19:44.899 "traddr": "10.0.0.2", 00:19:44.899 "adrfam": "ipv4", 00:19:44.899 "trsvcid": "4420", 00:19:44.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.899 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:44.899 "prchk_reftag": false, 00:19:44.899 "prchk_guard": false, 00:19:44.899 "hdgst": false, 00:19:44.899 "ddgst": false, 00:19:44.899 "psk": "key0", 00:19:44.899 "allow_unrecognized_csi": false, 00:19:44.899 "method": "bdev_nvme_attach_controller", 00:19:44.899 "req_id": 1 00:19:44.899 } 00:19:44.899 Got JSON-RPC error response 00:19:44.899 response: 00:19:44.899 { 00:19:44.899 "code": -5, 00:19:44.899 "message": "Input/output error" 00:19:44.899 } 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1035724 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1035724 ']' 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1035724 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035724 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035724' 00:19:44.899 killing process with pid 1035724 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1035724 00:19:44.899 Received shutdown signal, test time was about 10.000000 seconds 00:19:44.899 00:19:44.899 Latency(us) 00:19:44.899 [2024-10-11T09:53:29.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.899 [2024-10-11T09:53:29.531Z] =================================================================================================================== 00:19:44.899 [2024-10-11T09:53:29.531Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1035724 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q2VgrEBoxQ 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.Q2VgrEBoxQ 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Q2VgrEBoxQ 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Q2VgrEBoxQ 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1035890 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1035890 /var/tmp/bdevperf.sock 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1035890 ']' 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.899 11:53:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.160 [2024-10-11 11:53:29.540117] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:19:45.160 [2024-10-11 11:53:29.540172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035890 ] 00:19:45.160 [2024-10-11 11:53:29.616812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.160 [2024-10-11 11:53:29.644571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.731 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.731 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:45.731 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Q2VgrEBoxQ 00:19:45.992 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.253 [2024-10-11 11:53:30.662711] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.253 [2024-10-11 11:53:30.667297] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:46.253 [2024-10-11 11:53:30.667318] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:46.253 [2024-10-11 11:53:30.667335] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:46.253 [2024-10-11 11:53:30.667992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d370b0 (107): Transport endpoint is not connected 00:19:46.253 [2024-10-11 11:53:30.668988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d370b0 (9): Bad file descriptor 00:19:46.253 [2024-10-11 11:53:30.669989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:46.253 [2024-10-11 11:53:30.669997] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:46.253 [2024-10-11 11:53:30.670003] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:46.253 [2024-10-11 11:53:30.670012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:46.253 request: 00:19:46.253 { 00:19:46.253 "name": "TLSTEST", 00:19:46.253 "trtype": "tcp", 00:19:46.253 "traddr": "10.0.0.2", 00:19:46.253 "adrfam": "ipv4", 00:19:46.253 "trsvcid": "4420", 00:19:46.253 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:46.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.253 "prchk_reftag": false, 00:19:46.253 "prchk_guard": false, 00:19:46.253 "hdgst": false, 00:19:46.253 "ddgst": false, 00:19:46.253 "psk": "key0", 00:19:46.253 "allow_unrecognized_csi": false, 00:19:46.253 "method": "bdev_nvme_attach_controller", 00:19:46.253 "req_id": 1 00:19:46.253 } 00:19:46.253 Got JSON-RPC error response 00:19:46.253 response: 00:19:46.253 { 00:19:46.253 "code": -5, 00:19:46.253 "message": "Input/output error" 00:19:46.253 } 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1035890 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1035890 ']' 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1035890 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1035890 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1035890' 00:19:46.253 killing process with pid 1035890 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1035890 00:19:46.253 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.253 00:19:46.253 Latency(us) 00:19:46.253 [2024-10-11T09:53:30.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.253 [2024-10-11T09:53:30.885Z] =================================================================================================================== 00:19:46.253 [2024-10-11T09:53:30.885Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1035890 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:46.253 
11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1036229 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1036229 /var/tmp/bdevperf.sock 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1036229 ']' 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.253 11:53:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.514 [2024-10-11 11:53:30.909896] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
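This bdevperf instance probes keyring_file_add_key's input validation: tls.sh@156 passes an empty string as the key path, tripping the absolute-path check logged below, and tls.sh@171-172 later retries with the key file chmod-ed to 0666 to trip the permission check. A sketch of both rules as inferred from the logged errors (an approximation of SPDK's keyring_file checks, not its actual code; the exact permission mask is an assumption):

import os
import stat

def check_key_file(path: str) -> None:
    # "Non-absolute paths are not allowed" (keyring.c:24 in the log below)
    if not os.path.isabs(path):
        raise ValueError(f"Non-absolute paths are not allowed: {path!r}")
    # "Invalid permissions for key file ...: 0100666" -- any group/other
    # access is rejected; the tests chmod keys to 0600 before registering.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"Invalid permissions for key file {path!r}: {mode:04o}")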
00:19:46.514 [2024-10-11 11:53:30.909951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036229 ] 00:19:46.514 [2024-10-11 11:53:30.988649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.514 [2024-10-11 11:53:31.017415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.085 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.085 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.085 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:47.347 [2024-10-11 11:53:31.863219] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:47.347 [2024-10-11 11:53:31.863246] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:47.347 request: 00:19:47.347 { 00:19:47.347 "name": "key0", 00:19:47.347 "path": "", 00:19:47.347 "method": "keyring_file_add_key", 00:19:47.347 "req_id": 1 00:19:47.347 } 00:19:47.347 Got JSON-RPC error response 00:19:47.347 response: 00:19:47.347 { 00:19:47.347 "code": -1, 00:19:47.347 "message": "Operation not permitted" 00:19:47.347 } 00:19:47.347 11:53:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:47.608 [2024-10-11 11:53:32.039748] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.608 [2024-10-11 11:53:32.039769] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:47.608 request: 00:19:47.608 { 00:19:47.608 "name": "TLSTEST", 00:19:47.608 "trtype": "tcp", 00:19:47.608 "traddr": "10.0.0.2", 00:19:47.608 "adrfam": "ipv4", 00:19:47.608 "trsvcid": "4420", 00:19:47.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.608 "prchk_reftag": false, 00:19:47.608 "prchk_guard": false, 00:19:47.608 "hdgst": false, 00:19:47.608 "ddgst": false, 00:19:47.608 "psk": "key0", 00:19:47.608 "allow_unrecognized_csi": false, 00:19:47.608 "method": "bdev_nvme_attach_controller", 00:19:47.608 "req_id": 1 00:19:47.608 } 00:19:47.608 Got JSON-RPC error response 00:19:47.608 response: 00:19:47.608 { 00:19:47.608 "code": -126, 00:19:47.608 "message": "Required key not available" 00:19:47.608 } 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1036229 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1036229 ']' 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1036229 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1036229 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1036229' 00:19:47.608 killing process with pid 1036229 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1036229 00:19:47.608 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.608 00:19:47.608 Latency(us) 00:19:47.608 [2024-10-11T09:53:32.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.608 [2024-10-11T09:53:32.240Z] =================================================================================================================== 00:19:47.608 [2024-10-11T09:53:32.240Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1036229 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1030457 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1030457 ']' 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1030457 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.608 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1030457 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1030457' 00:19:47.869 killing process with pid 1030457 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1030457 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1030457 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:47.869 11:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.xGKfsFPR5w 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.xGKfsFPR5w 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1036539 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1036539 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1036539 ']' 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:47.869 11:53:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.130 [2024-10-11 11:53:32.514748] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:19:48.130 [2024-10-11 11:53:32.514807] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.130 [2024-10-11 11:53:32.598781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.130 [2024-10-11 11:53:32.635050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.130 [2024-10-11 11:53:32.635091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
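The format_interchange_psk call above is what the inline 'python -' heredoc computes: append a little-endian CRC32 of the raw key, base64-encode, and wrap with the NVMeTLSkey-1 prefix plus a two-hex-digit hash identifier (02 here, the SHA-384 variant of the TLS PSK interchange format). A standalone version of the same computation, reproducing the key_long value above:

import base64
import zlib

def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    # Interchange form: <prefix>:<hash id>:<base64(key bytes + CRC32(key))>:
    crc = zlib.crc32(key.encode()).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return f"{prefix}:{digest:02x}:{b64}:"

print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: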
00:19:48.130 [2024-10-11 11:53:32.635097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.130 [2024-10-11 11:53:32.635102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.130 [2024-10-11 11:53:32.635106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.130 [2024-10-11 11:53:32.635624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.700 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.700 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:48.700 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:48.700 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:48.700 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.960 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.960 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.xGKfsFPR5w 00:19:48.960 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xGKfsFPR5w 00:19:48.960 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:48.960 [2024-10-11 11:53:33.522882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.960 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.221 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.487 [2024-10-11 11:53:33.891791] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.487 [2024-10-11 11:53:33.891976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.487 11:53:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.487 malloc0 00:19:49.487 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.781 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xGKfsFPR5w 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xGKfsFPR5w 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1036957 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1036957 /var/tmp/bdevperf.sock 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1036957 ']' 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.088 11:53:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.380 [2024-10-11 11:53:34.708129] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
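With /tmp/tmp.xGKfsFPR5w registered on both sides (nvmf_subsystem_add_host --psk key0 on the target, keyring_file_add_key in this bdevperf instance), the attach finally succeeds and TLSTESTn1 drives verify I/O for 10 seconds over the TLS connection. The per-second IOPS samples and the summary below are consistent with the 4096-byte I/O size set by -o 4096:

# MiB/s in bdevperf's summary is IOPS * io_size / 2**20:
iops, io_size = 6533.40, 4096           # from the final Latency table below
print(f"{iops * io_size / 2**20:.2f}")  # 25.52, matching the reported MiB/s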
00:19:50.380 [2024-10-11 11:53:34.708181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036957 ] 00:19:50.380 [2024-10-11 11:53:34.783382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.380 [2024-10-11 11:53:34.812371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.991 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.991 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:50.991 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:19:51.252 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.252 [2024-10-11 11:53:35.830696] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.512 TLSTESTn1 00:19:51.512 11:53:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:51.512 Running I/O for 10 seconds... 00:19:53.393 6477.00 IOPS, 25.30 MiB/s [2024-10-11T09:53:39.408Z] 6522.00 IOPS, 25.48 MiB/s [2024-10-11T09:53:40.348Z] 6563.00 IOPS, 25.64 MiB/s [2024-10-11T09:53:41.290Z] 6521.00 IOPS, 25.47 MiB/s [2024-10-11T09:53:42.233Z] 6536.60 IOPS, 25.53 MiB/s [2024-10-11T09:53:43.173Z] 6532.33 IOPS, 25.52 MiB/s [2024-10-11T09:53:44.112Z] 6537.29 IOPS, 25.54 MiB/s [2024-10-11T09:53:45.052Z] 6537.25 IOPS, 25.54 MiB/s [2024-10-11T09:53:46.435Z] 6542.33 IOPS, 25.56 MiB/s [2024-10-11T09:53:46.435Z] 6530.00 IOPS, 25.51 MiB/s 00:20:01.803 Latency(us) 00:20:01.803 [2024-10-11T09:53:46.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.803 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.803 Verification LBA range: start 0x0 length 0x2000 00:20:01.803 TLSTESTn1 : 10.01 6533.40 25.52 0.00 0.00 19561.90 5652.48 24685.23 00:20:01.803 [2024-10-11T09:53:46.435Z] =================================================================================================================== 00:20:01.803 [2024-10-11T09:53:46.435Z] Total : 6533.40 25.52 0.00 0.00 19561.90 5652.48 24685.23 00:20:01.803 { 00:20:01.803 "results": [ 00:20:01.803 { 00:20:01.803 "job": "TLSTESTn1", 00:20:01.803 "core_mask": "0x4", 00:20:01.803 "workload": "verify", 00:20:01.803 "status": "finished", 00:20:01.803 "verify_range": { 00:20:01.803 "start": 0, 00:20:01.803 "length": 8192 00:20:01.803 }, 00:20:01.803 "queue_depth": 128, 00:20:01.803 "io_size": 4096, 00:20:01.803 "runtime": 10.013926, 00:20:01.803 "iops": 6533.401584952795, 00:20:01.803 "mibps": 25.521099941221856, 00:20:01.803 "io_failed": 0, 00:20:01.803 "io_timeout": 0, 00:20:01.803 "avg_latency_us": 19561.895399414087, 00:20:01.803 "min_latency_us": 5652.48, 00:20:01.803 "max_latency_us": 24685.226666666666 00:20:01.803 } 00:20:01.803 ], 00:20:01.803 "core_count": 1 
00:20:01.803 } 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1036957 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1036957 ']' 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1036957 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1036957 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1036957' 00:20:01.803 killing process with pid 1036957 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1036957 00:20:01.803 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.803 00:20:01.803 Latency(us) 00:20:01.803 [2024-10-11T09:53:46.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.803 [2024-10-11T09:53:46.435Z] =================================================================================================================== 00:20:01.803 [2024-10-11T09:53:46.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1036957 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.xGKfsFPR5w 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xGKfsFPR5w 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xGKfsFPR5w 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:01.803 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xGKfsFPR5w 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:01.804 11:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xGKfsFPR5w 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1039182 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1039182 /var/tmp/bdevperf.sock 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1039182 ']' 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.804 11:53:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.804 [2024-10-11 11:53:46.296950] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:20:01.804 [2024-10-11 11:53:46.297006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039182 ] 00:20:01.804 [2024-10-11 11:53:46.374902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.804 [2024-10-11 11:53:46.403477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.744 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.744 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:02.744 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:20:02.744 [2024-10-11 11:53:47.241466] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xGKfsFPR5w': 0100666 00:20:02.744 [2024-10-11 11:53:47.241493] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:02.744 request: 00:20:02.744 { 00:20:02.744 "name": "key0", 00:20:02.744 "path": "/tmp/tmp.xGKfsFPR5w", 00:20:02.744 "method": "keyring_file_add_key", 00:20:02.744 "req_id": 1 00:20:02.744 } 00:20:02.744 Got JSON-RPC error response 00:20:02.744 response: 00:20:02.744 { 00:20:02.744 "code": -1, 00:20:02.744 "message": "Operation not permitted" 00:20:02.744 } 00:20:02.744 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:03.004 [2024-10-11 11:53:47.421992] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.004 [2024-10-11 11:53:47.422017] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:03.004 request: 00:20:03.004 { 00:20:03.004 "name": "TLSTEST", 00:20:03.004 "trtype": "tcp", 00:20:03.004 "traddr": "10.0.0.2", 00:20:03.004 "adrfam": "ipv4", 00:20:03.004 "trsvcid": "4420", 00:20:03.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.004 "prchk_reftag": false, 00:20:03.004 "prchk_guard": false, 00:20:03.004 "hdgst": false, 00:20:03.004 "ddgst": false, 00:20:03.005 "psk": "key0", 00:20:03.005 "allow_unrecognized_csi": false, 00:20:03.005 "method": "bdev_nvme_attach_controller", 00:20:03.005 "req_id": 1 00:20:03.005 } 00:20:03.005 Got JSON-RPC error response 00:20:03.005 response: 00:20:03.005 { 00:20:03.005 "code": -126, 00:20:03.005 "message": "Required key not available" 00:20:03.005 } 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1039182 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1039182 ']' 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1039182 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1039182 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1039182' 00:20:03.005 killing process with pid 1039182 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1039182 00:20:03.005 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.005 00:20:03.005 Latency(us) 00:20:03.005 [2024-10-11T09:53:47.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.005 [2024-10-11T09:53:47.637Z] =================================================================================================================== 00:20:03.005 [2024-10-11T09:53:47.637Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1039182 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1036539 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1036539 ']' 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1036539 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.005 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1036539 00:20:03.265 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:03.265 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:03.265 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1036539' 00:20:03.265 killing process with pid 1036539 00:20:03.265 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1036539 00:20:03.265 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1036539 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1039448 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1039448 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1039448 ']' 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.266 11:53:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.266 [2024-10-11 11:53:47.844018] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:03.266 [2024-10-11 11:53:47.844073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.526 [2024-10-11 11:53:47.925582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.526 [2024-10-11 11:53:47.957819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.526 [2024-10-11 11:53:47.957854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.526 [2024-10-11 11:53:47.957861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.526 [2024-10-11 11:53:47.957866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.526 [2024-10-11 11:53:47.957870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
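Because the key file is still 0666 when this target replays the setup, keyring_file_add_key fails again, and nvmf_subsystem_add_host then reports -32603 below since key0 never came into existence. Taken together, the JSON-RPC error codes in this log map one-to-one onto the distinct failure points:

# Error codes observed in the responses above and below, with where they arise:
ERRORS = {
    -1:     "keyring_file_add_key: bad path or permissions (Operation not permitted)",
    -126:   "bdev_nvme_attach_controller: key0 absent from the keyring "
            "(Required key not available)",
    -5:     "bdev_nvme_attach_controller: target has no PSK for the host identity, "
            "handshake aborts (Input/output error)",
    -32603: "nvmf_subsystem_add_host: referenced key does not exist (Internal error)",
}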
00:20:03.526 [2024-10-11 11:53:47.958355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.xGKfsFPR5w 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.xGKfsFPR5w 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.xGKfsFPR5w 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xGKfsFPR5w 00:20:04.097 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:04.358 [2024-10-11 11:53:48.848637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.358 11:53:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:04.619 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:04.619 [2024-10-11 11:53:49.217544] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:04.619 [2024-10-11 11:53:49.217736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.619 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:04.880 malloc0 00:20:04.880 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:05.141 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:20:05.141 [2024-10-11 
11:53:49.760546] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xGKfsFPR5w': 0100666 00:20:05.141 [2024-10-11 11:53:49.760566] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:05.141 request: 00:20:05.141 { 00:20:05.141 "name": "key0", 00:20:05.141 "path": "/tmp/tmp.xGKfsFPR5w", 00:20:05.141 "method": "keyring_file_add_key", 00:20:05.141 "req_id": 1 00:20:05.141 } 00:20:05.141 Got JSON-RPC error response 00:20:05.141 response: 00:20:05.141 { 00:20:05.141 "code": -1, 00:20:05.141 "message": "Operation not permitted" 00:20:05.141 } 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.402 [2024-10-11 11:53:49.945031] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:05.402 [2024-10-11 11:53:49.945060] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:05.402 request: 00:20:05.402 { 00:20:05.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.402 "host": "nqn.2016-06.io.spdk:host1", 00:20:05.402 "psk": "key0", 00:20:05.402 "method": "nvmf_subsystem_add_host", 00:20:05.402 "req_id": 1 00:20:05.402 } 00:20:05.402 Got JSON-RPC error response 00:20:05.402 response: 00:20:05.402 { 00:20:05.402 "code": -32603, 00:20:05.402 "message": "Internal error" 00:20:05.402 } 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1039448 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1039448 ']' 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1039448 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.402 11:53:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1039448 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1039448' 00:20:05.663 killing process with pid 1039448 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1039448 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1039448 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.xGKfsFPR5w 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:05.663 11:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1040029 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1040029 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1040029 ']' 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.663 11:53:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.663 [2024-10-11 11:53:50.217300] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:05.663 [2024-10-11 11:53:50.217352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.923 [2024-10-11 11:53:50.300966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.923 [2024-10-11 11:53:50.329554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.923 [2024-10-11 11:53:50.329585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.923 [2024-10-11 11:53:50.329590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.923 [2024-10-11 11:53:50.329595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.923 [2024-10-11 11:53:50.329599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
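The key was chmod-ed back to 0600 before this target came up, so setup_nvmf_tgt below finally completes end to end: transport, subsystem, TLS listener, malloc namespace, key, and host all register cleanly, and the subsequent save_config dump reflects the key0 entry in the keyring subsystem. For reference, the same sequence expressed as direct rpc.py invocations (a sketch; the rpc.py path is this workspace's and every argument is copied from the calls logged below):

import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def rpc(*args: str) -> None:
    subprocess.run([RPC, *args], check=True)

rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
    "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")  # -k: TLS-enabled listener
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
rpc("keyring_file_add_key", "key0", "/tmp/tmp.xGKfsFPR5w")  # requires 0600 perms
rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
    "nqn.2016-06.io.spdk:host1", "--psk", "key0")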
00:20:05.923 [2024-10-11 11:53:50.330078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.xGKfsFPR5w 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xGKfsFPR5w 00:20:06.495 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:06.755 [2024-10-11 11:53:51.213282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.755 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:07.015 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:07.015 [2024-10-11 11:53:51.550105] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.015 [2024-10-11 11:53:51.550320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.015 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:07.275 malloc0 00:20:07.275 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:07.275 11:53:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:20:07.536 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1040393 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1040393 /var/tmp/bdevperf.sock 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1040393 ']' 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.796 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.796 [2024-10-11 11:53:52.264684] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:07.796 [2024-10-11 11:53:52.264736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040393 ] 00:20:07.796 [2024-10-11 11:53:52.342405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.796 [2024-10-11 11:53:52.371597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.056 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.056 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:08.056 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:20:08.056 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:08.317 [2024-10-11 11:53:52.748512] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.317 TLSTESTn1 00:20:08.317 11:53:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:08.579 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:08.579 "subsystems": [ 00:20:08.579 { 00:20:08.579 "subsystem": "keyring", 00:20:08.579 "config": [ 00:20:08.579 { 00:20:08.579 "method": "keyring_file_add_key", 00:20:08.579 "params": { 00:20:08.579 "name": "key0", 00:20:08.579 "path": "/tmp/tmp.xGKfsFPR5w" 00:20:08.579 } 00:20:08.579 } 00:20:08.579 ] 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "subsystem": "iobuf", 00:20:08.579 "config": [ 00:20:08.579 { 00:20:08.579 "method": "iobuf_set_options", 00:20:08.579 "params": { 00:20:08.579 "small_pool_count": 8192, 00:20:08.579 "large_pool_count": 1024, 00:20:08.579 "small_bufsize": 8192, 00:20:08.579 "large_bufsize": 135168 00:20:08.579 } 00:20:08.579 } 00:20:08.579 ] 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "subsystem": "sock", 00:20:08.579 "config": [ 00:20:08.579 { 00:20:08.579 "method": "sock_set_default_impl", 00:20:08.579 "params": { 00:20:08.579 "impl_name": "posix" 00:20:08.579 } 00:20:08.579 }, 
00:20:08.579 { 00:20:08.579 "method": "sock_impl_set_options", 00:20:08.579 "params": { 00:20:08.579 "impl_name": "ssl", 00:20:08.579 "recv_buf_size": 4096, 00:20:08.579 "send_buf_size": 4096, 00:20:08.579 "enable_recv_pipe": true, 00:20:08.579 "enable_quickack": false, 00:20:08.579 "enable_placement_id": 0, 00:20:08.579 "enable_zerocopy_send_server": true, 00:20:08.579 "enable_zerocopy_send_client": false, 00:20:08.579 "zerocopy_threshold": 0, 00:20:08.579 "tls_version": 0, 00:20:08.579 "enable_ktls": false 00:20:08.579 } 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "method": "sock_impl_set_options", 00:20:08.579 "params": { 00:20:08.579 "impl_name": "posix", 00:20:08.579 "recv_buf_size": 2097152, 00:20:08.579 "send_buf_size": 2097152, 00:20:08.579 "enable_recv_pipe": true, 00:20:08.579 "enable_quickack": false, 00:20:08.579 "enable_placement_id": 0, 00:20:08.579 "enable_zerocopy_send_server": true, 00:20:08.579 "enable_zerocopy_send_client": false, 00:20:08.579 "zerocopy_threshold": 0, 00:20:08.579 "tls_version": 0, 00:20:08.579 "enable_ktls": false 00:20:08.579 } 00:20:08.579 } 00:20:08.579 ] 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "subsystem": "vmd", 00:20:08.579 "config": [] 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "subsystem": "accel", 00:20:08.579 "config": [ 00:20:08.579 { 00:20:08.579 "method": "accel_set_options", 00:20:08.579 "params": { 00:20:08.579 "small_cache_size": 128, 00:20:08.579 "large_cache_size": 16, 00:20:08.579 "task_count": 2048, 00:20:08.579 "sequence_count": 2048, 00:20:08.579 "buf_count": 2048 00:20:08.579 } 00:20:08.579 } 00:20:08.579 ] 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "subsystem": "bdev", 00:20:08.579 "config": [ 00:20:08.579 { 00:20:08.579 "method": "bdev_set_options", 00:20:08.579 "params": { 00:20:08.579 "bdev_io_pool_size": 65535, 00:20:08.579 "bdev_io_cache_size": 256, 00:20:08.579 "bdev_auto_examine": true, 00:20:08.579 "iobuf_small_cache_size": 128, 00:20:08.579 "iobuf_large_cache_size": 16 00:20:08.579 } 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "method": "bdev_raid_set_options", 00:20:08.579 "params": { 00:20:08.579 "process_window_size_kb": 1024, 00:20:08.579 "process_max_bandwidth_mb_sec": 0 00:20:08.579 } 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "method": "bdev_iscsi_set_options", 00:20:08.579 "params": { 00:20:08.579 "timeout_sec": 30 00:20:08.579 } 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "method": "bdev_nvme_set_options", 00:20:08.579 "params": { 00:20:08.579 "action_on_timeout": "none", 00:20:08.579 "timeout_us": 0, 00:20:08.579 "timeout_admin_us": 0, 00:20:08.579 "keep_alive_timeout_ms": 10000, 00:20:08.579 "arbitration_burst": 0, 00:20:08.579 "low_priority_weight": 0, 00:20:08.579 "medium_priority_weight": 0, 00:20:08.579 "high_priority_weight": 0, 00:20:08.579 "nvme_adminq_poll_period_us": 10000, 00:20:08.579 "nvme_ioq_poll_period_us": 0, 00:20:08.579 "io_queue_requests": 0, 00:20:08.579 "delay_cmd_submit": true, 00:20:08.579 "transport_retry_count": 4, 00:20:08.579 "bdev_retry_count": 3, 00:20:08.579 "transport_ack_timeout": 0, 00:20:08.579 "ctrlr_loss_timeout_sec": 0, 00:20:08.579 "reconnect_delay_sec": 0, 00:20:08.579 "fast_io_fail_timeout_sec": 0, 00:20:08.579 "disable_auto_failback": false, 00:20:08.579 "generate_uuids": false, 00:20:08.579 "transport_tos": 0, 00:20:08.579 "nvme_error_stat": false, 00:20:08.579 "rdma_srq_size": 0, 00:20:08.579 "io_path_stat": false, 00:20:08.579 "allow_accel_sequence": false, 00:20:08.579 "rdma_max_cq_size": 0, 00:20:08.579 "rdma_cm_event_timeout_ms": 0, 00:20:08.579 
"dhchap_digests": [ 00:20:08.579 "sha256", 00:20:08.579 "sha384", 00:20:08.579 "sha512" 00:20:08.579 ], 00:20:08.579 "dhchap_dhgroups": [ 00:20:08.579 "null", 00:20:08.579 "ffdhe2048", 00:20:08.579 "ffdhe3072", 00:20:08.579 "ffdhe4096", 00:20:08.579 "ffdhe6144", 00:20:08.579 "ffdhe8192" 00:20:08.579 ] 00:20:08.579 } 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "method": "bdev_nvme_set_hotplug", 00:20:08.579 "params": { 00:20:08.579 "period_us": 100000, 00:20:08.579 "enable": false 00:20:08.579 } 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "method": "bdev_malloc_create", 00:20:08.579 "params": { 00:20:08.579 "name": "malloc0", 00:20:08.579 "num_blocks": 8192, 00:20:08.579 "block_size": 4096, 00:20:08.579 "physical_block_size": 4096, 00:20:08.579 "uuid": "18724018-8d19-4baf-afd6-1da8ea972e27", 00:20:08.579 "optimal_io_boundary": 0, 00:20:08.579 "md_size": 0, 00:20:08.579 "dif_type": 0, 00:20:08.579 "dif_is_head_of_md": false, 00:20:08.579 "dif_pi_format": 0 00:20:08.579 } 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "method": "bdev_wait_for_examine" 00:20:08.579 } 00:20:08.579 ] 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "subsystem": "nbd", 00:20:08.579 "config": [] 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "subsystem": "scheduler", 00:20:08.579 "config": [ 00:20:08.579 { 00:20:08.579 "method": "framework_set_scheduler", 00:20:08.579 "params": { 00:20:08.579 "name": "static" 00:20:08.579 } 00:20:08.579 } 00:20:08.579 ] 00:20:08.579 }, 00:20:08.579 { 00:20:08.579 "subsystem": "nvmf", 00:20:08.579 "config": [ 00:20:08.579 { 00:20:08.579 "method": "nvmf_set_config", 00:20:08.579 "params": { 00:20:08.579 "discovery_filter": "match_any", 00:20:08.579 "admin_cmd_passthru": { 00:20:08.580 "identify_ctrlr": false 00:20:08.580 }, 00:20:08.580 "dhchap_digests": [ 00:20:08.580 "sha256", 00:20:08.580 "sha384", 00:20:08.580 "sha512" 00:20:08.580 ], 00:20:08.580 "dhchap_dhgroups": [ 00:20:08.580 "null", 00:20:08.580 "ffdhe2048", 00:20:08.580 "ffdhe3072", 00:20:08.580 "ffdhe4096", 00:20:08.580 "ffdhe6144", 00:20:08.580 "ffdhe8192" 00:20:08.580 ] 00:20:08.580 } 00:20:08.580 }, 00:20:08.580 { 00:20:08.580 "method": "nvmf_set_max_subsystems", 00:20:08.580 "params": { 00:20:08.580 "max_subsystems": 1024 00:20:08.580 } 00:20:08.580 }, 00:20:08.580 { 00:20:08.580 "method": "nvmf_set_crdt", 00:20:08.580 "params": { 00:20:08.580 "crdt1": 0, 00:20:08.580 "crdt2": 0, 00:20:08.580 "crdt3": 0 00:20:08.580 } 00:20:08.580 }, 00:20:08.580 { 00:20:08.580 "method": "nvmf_create_transport", 00:20:08.580 "params": { 00:20:08.580 "trtype": "TCP", 00:20:08.580 "max_queue_depth": 128, 00:20:08.580 "max_io_qpairs_per_ctrlr": 127, 00:20:08.580 "in_capsule_data_size": 4096, 00:20:08.580 "max_io_size": 131072, 00:20:08.580 "io_unit_size": 131072, 00:20:08.580 "max_aq_depth": 128, 00:20:08.580 "num_shared_buffers": 511, 00:20:08.580 "buf_cache_size": 4294967295, 00:20:08.580 "dif_insert_or_strip": false, 00:20:08.580 "zcopy": false, 00:20:08.580 "c2h_success": false, 00:20:08.580 "sock_priority": 0, 00:20:08.580 "abort_timeout_sec": 1, 00:20:08.580 "ack_timeout": 0, 00:20:08.580 "data_wr_pool_size": 0 00:20:08.580 } 00:20:08.580 }, 00:20:08.580 { 00:20:08.580 "method": "nvmf_create_subsystem", 00:20:08.580 "params": { 00:20:08.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.580 "allow_any_host": false, 00:20:08.580 "serial_number": "SPDK00000000000001", 00:20:08.580 "model_number": "SPDK bdev Controller", 00:20:08.580 "max_namespaces": 10, 00:20:08.580 "min_cntlid": 1, 00:20:08.580 "max_cntlid": 65519, 00:20:08.580 
"ana_reporting": false 00:20:08.580 } 00:20:08.580 }, 00:20:08.580 { 00:20:08.580 "method": "nvmf_subsystem_add_host", 00:20:08.580 "params": { 00:20:08.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.580 "host": "nqn.2016-06.io.spdk:host1", 00:20:08.580 "psk": "key0" 00:20:08.580 } 00:20:08.580 }, 00:20:08.580 { 00:20:08.580 "method": "nvmf_subsystem_add_ns", 00:20:08.580 "params": { 00:20:08.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.580 "namespace": { 00:20:08.580 "nsid": 1, 00:20:08.580 "bdev_name": "malloc0", 00:20:08.580 "nguid": "187240188D194BAFAFD61DA8EA972E27", 00:20:08.580 "uuid": "18724018-8d19-4baf-afd6-1da8ea972e27", 00:20:08.580 "no_auto_visible": false 00:20:08.580 } 00:20:08.580 } 00:20:08.580 }, 00:20:08.580 { 00:20:08.580 "method": "nvmf_subsystem_add_listener", 00:20:08.580 "params": { 00:20:08.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.580 "listen_address": { 00:20:08.580 "trtype": "TCP", 00:20:08.580 "adrfam": "IPv4", 00:20:08.580 "traddr": "10.0.0.2", 00:20:08.580 "trsvcid": "4420" 00:20:08.580 }, 00:20:08.580 "secure_channel": true 00:20:08.580 } 00:20:08.580 } 00:20:08.580 ] 00:20:08.580 } 00:20:08.580 ] 00:20:08.580 }' 00:20:08.580 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:08.841 "subsystems": [ 00:20:08.841 { 00:20:08.841 "subsystem": "keyring", 00:20:08.841 "config": [ 00:20:08.841 { 00:20:08.841 "method": "keyring_file_add_key", 00:20:08.841 "params": { 00:20:08.841 "name": "key0", 00:20:08.841 "path": "/tmp/tmp.xGKfsFPR5w" 00:20:08.841 } 00:20:08.841 } 00:20:08.841 ] 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "subsystem": "iobuf", 00:20:08.841 "config": [ 00:20:08.841 { 00:20:08.841 "method": "iobuf_set_options", 00:20:08.841 "params": { 00:20:08.841 "small_pool_count": 8192, 00:20:08.841 "large_pool_count": 1024, 00:20:08.841 "small_bufsize": 8192, 00:20:08.841 "large_bufsize": 135168 00:20:08.841 } 00:20:08.841 } 00:20:08.841 ] 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "subsystem": "sock", 00:20:08.841 "config": [ 00:20:08.841 { 00:20:08.841 "method": "sock_set_default_impl", 00:20:08.841 "params": { 00:20:08.841 "impl_name": "posix" 00:20:08.841 } 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "method": "sock_impl_set_options", 00:20:08.841 "params": { 00:20:08.841 "impl_name": "ssl", 00:20:08.841 "recv_buf_size": 4096, 00:20:08.841 "send_buf_size": 4096, 00:20:08.841 "enable_recv_pipe": true, 00:20:08.841 "enable_quickack": false, 00:20:08.841 "enable_placement_id": 0, 00:20:08.841 "enable_zerocopy_send_server": true, 00:20:08.841 "enable_zerocopy_send_client": false, 00:20:08.841 "zerocopy_threshold": 0, 00:20:08.841 "tls_version": 0, 00:20:08.841 "enable_ktls": false 00:20:08.841 } 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "method": "sock_impl_set_options", 00:20:08.841 "params": { 00:20:08.841 "impl_name": "posix", 00:20:08.841 "recv_buf_size": 2097152, 00:20:08.841 "send_buf_size": 2097152, 00:20:08.841 "enable_recv_pipe": true, 00:20:08.841 "enable_quickack": false, 00:20:08.841 "enable_placement_id": 0, 00:20:08.841 "enable_zerocopy_send_server": true, 00:20:08.841 "enable_zerocopy_send_client": false, 00:20:08.841 "zerocopy_threshold": 0, 00:20:08.841 "tls_version": 0, 00:20:08.841 "enable_ktls": false 00:20:08.841 } 00:20:08.841 } 00:20:08.841 ] 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 
"subsystem": "vmd", 00:20:08.841 "config": [] 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "subsystem": "accel", 00:20:08.841 "config": [ 00:20:08.841 { 00:20:08.841 "method": "accel_set_options", 00:20:08.841 "params": { 00:20:08.841 "small_cache_size": 128, 00:20:08.841 "large_cache_size": 16, 00:20:08.841 "task_count": 2048, 00:20:08.841 "sequence_count": 2048, 00:20:08.841 "buf_count": 2048 00:20:08.841 } 00:20:08.841 } 00:20:08.841 ] 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "subsystem": "bdev", 00:20:08.841 "config": [ 00:20:08.841 { 00:20:08.841 "method": "bdev_set_options", 00:20:08.841 "params": { 00:20:08.841 "bdev_io_pool_size": 65535, 00:20:08.841 "bdev_io_cache_size": 256, 00:20:08.841 "bdev_auto_examine": true, 00:20:08.841 "iobuf_small_cache_size": 128, 00:20:08.841 "iobuf_large_cache_size": 16 00:20:08.841 } 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "method": "bdev_raid_set_options", 00:20:08.841 "params": { 00:20:08.841 "process_window_size_kb": 1024, 00:20:08.841 "process_max_bandwidth_mb_sec": 0 00:20:08.841 } 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "method": "bdev_iscsi_set_options", 00:20:08.841 "params": { 00:20:08.841 "timeout_sec": 30 00:20:08.841 } 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "method": "bdev_nvme_set_options", 00:20:08.841 "params": { 00:20:08.841 "action_on_timeout": "none", 00:20:08.841 "timeout_us": 0, 00:20:08.841 "timeout_admin_us": 0, 00:20:08.841 "keep_alive_timeout_ms": 10000, 00:20:08.841 "arbitration_burst": 0, 00:20:08.841 "low_priority_weight": 0, 00:20:08.841 "medium_priority_weight": 0, 00:20:08.841 "high_priority_weight": 0, 00:20:08.841 "nvme_adminq_poll_period_us": 10000, 00:20:08.841 "nvme_ioq_poll_period_us": 0, 00:20:08.841 "io_queue_requests": 512, 00:20:08.841 "delay_cmd_submit": true, 00:20:08.841 "transport_retry_count": 4, 00:20:08.841 "bdev_retry_count": 3, 00:20:08.841 "transport_ack_timeout": 0, 00:20:08.841 "ctrlr_loss_timeout_sec": 0, 00:20:08.841 "reconnect_delay_sec": 0, 00:20:08.841 "fast_io_fail_timeout_sec": 0, 00:20:08.841 "disable_auto_failback": false, 00:20:08.841 "generate_uuids": false, 00:20:08.841 "transport_tos": 0, 00:20:08.841 "nvme_error_stat": false, 00:20:08.841 "rdma_srq_size": 0, 00:20:08.841 "io_path_stat": false, 00:20:08.841 "allow_accel_sequence": false, 00:20:08.841 "rdma_max_cq_size": 0, 00:20:08.841 "rdma_cm_event_timeout_ms": 0, 00:20:08.841 "dhchap_digests": [ 00:20:08.841 "sha256", 00:20:08.841 "sha384", 00:20:08.841 "sha512" 00:20:08.841 ], 00:20:08.841 "dhchap_dhgroups": [ 00:20:08.841 "null", 00:20:08.841 "ffdhe2048", 00:20:08.841 "ffdhe3072", 00:20:08.841 "ffdhe4096", 00:20:08.841 "ffdhe6144", 00:20:08.841 "ffdhe8192" 00:20:08.841 ] 00:20:08.841 } 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "method": "bdev_nvme_attach_controller", 00:20:08.841 "params": { 00:20:08.841 "name": "TLSTEST", 00:20:08.841 "trtype": "TCP", 00:20:08.841 "adrfam": "IPv4", 00:20:08.841 "traddr": "10.0.0.2", 00:20:08.841 "trsvcid": "4420", 00:20:08.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.841 "prchk_reftag": false, 00:20:08.841 "prchk_guard": false, 00:20:08.841 "ctrlr_loss_timeout_sec": 0, 00:20:08.841 "reconnect_delay_sec": 0, 00:20:08.841 "fast_io_fail_timeout_sec": 0, 00:20:08.841 "psk": "key0", 00:20:08.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.841 "hdgst": false, 00:20:08.841 "ddgst": false, 00:20:08.841 "multipath": "multipath" 00:20:08.841 } 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "method": "bdev_nvme_set_hotplug", 00:20:08.841 "params": { 00:20:08.841 "period_us": 
100000, 00:20:08.841 "enable": false 00:20:08.841 } 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "method": "bdev_wait_for_examine" 00:20:08.841 } 00:20:08.841 ] 00:20:08.841 }, 00:20:08.841 { 00:20:08.841 "subsystem": "nbd", 00:20:08.841 "config": [] 00:20:08.841 } 00:20:08.841 ] 00:20:08.841 }' 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1040393 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1040393 ']' 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1040393 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040393 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040393' 00:20:08.841 killing process with pid 1040393 00:20:08.841 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1040393 00:20:08.841 Received shutdown signal, test time was about 10.000000 seconds 00:20:08.841 00:20:08.841 Latency(us) 00:20:08.841 [2024-10-11T09:53:53.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.841 [2024-10-11T09:53:53.473Z] =================================================================================================================== 00:20:08.841 [2024-10-11T09:53:53.474Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:08.842 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1040393 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1040029 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1040029 ']' 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1040029 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040029 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040029' 00:20:09.103 killing process with pid 1040029 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1040029 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1040029 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:09.103 
11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.103 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:09.103 "subsystems": [ 00:20:09.103 { 00:20:09.103 "subsystem": "keyring", 00:20:09.103 "config": [ 00:20:09.103 { 00:20:09.103 "method": "keyring_file_add_key", 00:20:09.103 "params": { 00:20:09.103 "name": "key0", 00:20:09.103 "path": "/tmp/tmp.xGKfsFPR5w" 00:20:09.103 } 00:20:09.103 } 00:20:09.103 ] 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "subsystem": "iobuf", 00:20:09.103 "config": [ 00:20:09.103 { 00:20:09.103 "method": "iobuf_set_options", 00:20:09.103 "params": { 00:20:09.103 "small_pool_count": 8192, 00:20:09.103 "large_pool_count": 1024, 00:20:09.103 "small_bufsize": 8192, 00:20:09.103 "large_bufsize": 135168 00:20:09.103 } 00:20:09.103 } 00:20:09.103 ] 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "subsystem": "sock", 00:20:09.103 "config": [ 00:20:09.103 { 00:20:09.103 "method": "sock_set_default_impl", 00:20:09.103 "params": { 00:20:09.103 "impl_name": "posix" 00:20:09.103 } 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "method": "sock_impl_set_options", 00:20:09.103 "params": { 00:20:09.103 "impl_name": "ssl", 00:20:09.103 "recv_buf_size": 4096, 00:20:09.103 "send_buf_size": 4096, 00:20:09.103 "enable_recv_pipe": true, 00:20:09.103 "enable_quickack": false, 00:20:09.103 "enable_placement_id": 0, 00:20:09.103 "enable_zerocopy_send_server": true, 00:20:09.103 "enable_zerocopy_send_client": false, 00:20:09.103 "zerocopy_threshold": 0, 00:20:09.103 "tls_version": 0, 00:20:09.103 "enable_ktls": false 00:20:09.103 } 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "method": "sock_impl_set_options", 00:20:09.103 "params": { 00:20:09.103 "impl_name": "posix", 00:20:09.103 "recv_buf_size": 2097152, 00:20:09.103 "send_buf_size": 2097152, 00:20:09.103 "enable_recv_pipe": true, 00:20:09.103 "enable_quickack": false, 00:20:09.103 "enable_placement_id": 0, 00:20:09.103 "enable_zerocopy_send_server": true, 00:20:09.103 "enable_zerocopy_send_client": false, 00:20:09.103 "zerocopy_threshold": 0, 00:20:09.103 "tls_version": 0, 00:20:09.103 "enable_ktls": false 00:20:09.103 } 00:20:09.103 } 00:20:09.103 ] 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "subsystem": "vmd", 00:20:09.103 "config": [] 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "subsystem": "accel", 00:20:09.103 "config": [ 00:20:09.103 { 00:20:09.103 "method": "accel_set_options", 00:20:09.103 "params": { 00:20:09.103 "small_cache_size": 128, 00:20:09.103 "large_cache_size": 16, 00:20:09.103 "task_count": 2048, 00:20:09.103 "sequence_count": 2048, 00:20:09.103 "buf_count": 2048 00:20:09.103 } 00:20:09.103 } 00:20:09.103 ] 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "subsystem": "bdev", 00:20:09.103 "config": [ 00:20:09.103 { 00:20:09.103 "method": "bdev_set_options", 00:20:09.103 "params": { 00:20:09.103 "bdev_io_pool_size": 65535, 00:20:09.103 "bdev_io_cache_size": 256, 00:20:09.103 "bdev_auto_examine": true, 00:20:09.103 "iobuf_small_cache_size": 128, 00:20:09.103 "iobuf_large_cache_size": 16 00:20:09.103 } 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "method": "bdev_raid_set_options", 00:20:09.103 "params": { 00:20:09.103 "process_window_size_kb": 1024, 00:20:09.103 "process_max_bandwidth_mb_sec": 0 00:20:09.103 } 00:20:09.103 }, 
00:20:09.103 { 00:20:09.103 "method": "bdev_iscsi_set_options", 00:20:09.103 "params": { 00:20:09.103 "timeout_sec": 30 00:20:09.103 } 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "method": "bdev_nvme_set_options", 00:20:09.103 "params": { 00:20:09.103 "action_on_timeout": "none", 00:20:09.103 "timeout_us": 0, 00:20:09.103 "timeout_admin_us": 0, 00:20:09.103 "keep_alive_timeout_ms": 10000, 00:20:09.103 "arbitration_burst": 0, 00:20:09.103 "low_priority_weight": 0, 00:20:09.103 "medium_priority_weight": 0, 00:20:09.103 "high_priority_weight": 0, 00:20:09.103 "nvme_adminq_poll_period_us": 10000, 00:20:09.103 "nvme_ioq_poll_period_us": 0, 00:20:09.103 "io_queue_requests": 0, 00:20:09.103 "delay_cmd_submit": true, 00:20:09.103 "transport_retry_count": 4, 00:20:09.103 "bdev_retry_count": 3, 00:20:09.103 "transport_ack_timeout": 0, 00:20:09.103 "ctrlr_loss_timeout_sec": 0, 00:20:09.103 "reconnect_delay_sec": 0, 00:20:09.103 "fast_io_fail_timeout_sec": 0, 00:20:09.103 "disable_auto_failback": false, 00:20:09.103 "generate_uuids": false, 00:20:09.103 "transport_tos": 0, 00:20:09.103 "nvme_error_stat": false, 00:20:09.103 "rdma_srq_size": 0, 00:20:09.103 "io_path_stat": false, 00:20:09.103 "allow_accel_sequence": false, 00:20:09.103 "rdma_max_cq_size": 0, 00:20:09.103 "rdma_cm_event_timeout_ms": 0, 00:20:09.103 "dhchap_digests": [ 00:20:09.103 "sha256", 00:20:09.103 "sha384", 00:20:09.103 "sha512" 00:20:09.103 ], 00:20:09.103 "dhchap_dhgroups": [ 00:20:09.103 "null", 00:20:09.103 "ffdhe2048", 00:20:09.103 "ffdhe3072", 00:20:09.103 "ffdhe4096", 00:20:09.103 "ffdhe6144", 00:20:09.103 "ffdhe8192" 00:20:09.103 ] 00:20:09.103 } 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "method": "bdev_nvme_set_hotplug", 00:20:09.103 "params": { 00:20:09.103 "period_us": 100000, 00:20:09.103 "enable": false 00:20:09.103 } 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "method": "bdev_malloc_create", 00:20:09.103 "params": { 00:20:09.103 "name": "malloc0", 00:20:09.103 "num_blocks": 8192, 00:20:09.103 "block_size": 4096, 00:20:09.103 "physical_block_size": 4096, 00:20:09.103 "uuid": "18724018-8d19-4baf-afd6-1da8ea972e27", 00:20:09.103 "optimal_io_boundary": 0, 00:20:09.103 "md_size": 0, 00:20:09.103 "dif_type": 0, 00:20:09.103 "dif_is_head_of_md": false, 00:20:09.103 "dif_pi_format": 0 00:20:09.103 } 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "method": "bdev_wait_for_examine" 00:20:09.103 } 00:20:09.103 ] 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "subsystem": "nbd", 00:20:09.103 "config": [] 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "subsystem": "scheduler", 00:20:09.103 "config": [ 00:20:09.103 { 00:20:09.103 "method": "framework_set_scheduler", 00:20:09.103 "params": { 00:20:09.103 "name": "static" 00:20:09.103 } 00:20:09.103 } 00:20:09.103 ] 00:20:09.103 }, 00:20:09.103 { 00:20:09.103 "subsystem": "nvmf", 00:20:09.103 "config": [ 00:20:09.103 { 00:20:09.103 "method": "nvmf_set_config", 00:20:09.103 "params": { 00:20:09.103 "discovery_filter": "match_any", 00:20:09.103 "admin_cmd_passthru": { 00:20:09.103 "identify_ctrlr": false 00:20:09.103 }, 00:20:09.103 "dhchap_digests": [ 00:20:09.103 "sha256", 00:20:09.103 "sha384", 00:20:09.103 "sha512" 00:20:09.103 ], 00:20:09.103 "dhchap_dhgroups": [ 00:20:09.104 "null", 00:20:09.104 "ffdhe2048", 00:20:09.104 "ffdhe3072", 00:20:09.104 "ffdhe4096", 00:20:09.104 "ffdhe6144", 00:20:09.104 "ffdhe8192" 00:20:09.104 ] 00:20:09.104 } 00:20:09.104 }, 00:20:09.104 { 00:20:09.104 "method": "nvmf_set_max_subsystems", 00:20:09.104 "params": { 00:20:09.104 "max_subsystems": 1024 
00:20:09.104 } 00:20:09.104 }, 00:20:09.104 { 00:20:09.104 "method": "nvmf_set_crdt", 00:20:09.104 "params": { 00:20:09.104 "crdt1": 0, 00:20:09.104 "crdt2": 0, 00:20:09.104 "crdt3": 0 00:20:09.104 } 00:20:09.104 }, 00:20:09.104 { 00:20:09.104 "method": "nvmf_create_transport", 00:20:09.104 "params": { 00:20:09.104 "trtype": "TCP", 00:20:09.104 "max_queue_depth": 128, 00:20:09.104 "max_io_qpairs_per_ctrlr": 127, 00:20:09.104 "in_capsule_data_size": 4096, 00:20:09.104 "max_io_size": 131072, 00:20:09.104 "io_unit_size": 131072, 00:20:09.104 "max_aq_depth": 128, 00:20:09.104 "num_shared_buffers": 511, 00:20:09.104 "buf_cache_size": 4294967295, 00:20:09.104 "dif_insert_or_strip": false, 00:20:09.104 "zcopy": false, 00:20:09.104 "c2h_success": false, 00:20:09.104 "sock_priority": 0, 00:20:09.104 "abort_timeout_sec": 1, 00:20:09.104 "ack_timeout": 0, 00:20:09.104 "data_wr_pool_size": 0 00:20:09.104 } 00:20:09.104 }, 00:20:09.104 { 00:20:09.104 "method": "nvmf_create_subsystem", 00:20:09.104 "params": { 00:20:09.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.104 "allow_any_host": false, 00:20:09.104 "serial_number": "SPDK00000000000001", 00:20:09.104 "model_number": "SPDK bdev Controller", 00:20:09.104 "max_namespaces": 10, 00:20:09.104 "min_cntlid": 1, 00:20:09.104 "max_cntlid": 65519, 00:20:09.104 "ana_reporting": false 00:20:09.104 } 00:20:09.104 }, 00:20:09.104 { 00:20:09.104 "method": "nvmf_subsystem_add_host", 00:20:09.104 "params": { 00:20:09.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.104 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.104 "psk": "key0" 00:20:09.104 } 00:20:09.104 }, 00:20:09.104 { 00:20:09.104 "method": "nvmf_subsystem_add_ns", 00:20:09.104 "params": { 00:20:09.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.104 "namespace": { 00:20:09.104 "nsid": 1, 00:20:09.104 "bdev_name": "malloc0", 00:20:09.104 "nguid": "187240188D194BAFAFD61DA8EA972E27", 00:20:09.104 "uuid": "18724018-8d19-4baf-afd6-1da8ea972e27", 00:20:09.104 "no_auto_visible": false 00:20:09.104 } 00:20:09.104 } 00:20:09.104 }, 00:20:09.104 { 00:20:09.104 "method": "nvmf_subsystem_add_listener", 00:20:09.104 "params": { 00:20:09.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.104 "listen_address": { 00:20:09.104 "trtype": "TCP", 00:20:09.104 "adrfam": "IPv4", 00:20:09.104 "traddr": "10.0.0.2", 00:20:09.104 "trsvcid": "4420" 00:20:09.104 }, 00:20:09.104 "secure_channel": true 00:20:09.104 } 00:20:09.104 } 00:20:09.104 ] 00:20:09.104 } 00:20:09.104 ] 00:20:09.104 }' 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1040740 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1040740 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1040740 ']' 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
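The JSON echoed above is the target configuration that save_config captured earlier, now replayed into a fresh nvmf_tgt through -c /dev/fd/62; /dev/fd/62 is simply what bash process substitution expands to. A sketch of the pattern, assuming the captured config sits in a shell variable the way the test's tgtconf= assignment above shows:

# Capture the running target's full configuration as JSON...
tgtconf=$("$SPDK/scripts/rpc.py" save_config)

# ...and, after tearing the old target down, boot a new one directly from
# that JSON. <(echo ...) is what appears as /dev/fd/62 in the log above.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$tgtconf") &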
00:20:09.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.104 11:53:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.364 [2024-10-11 11:53:53.762330] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:09.364 [2024-10-11 11:53:53.762402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.364 [2024-10-11 11:53:53.846337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.364 [2024-10-11 11:53:53.875192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.364 [2024-10-11 11:53:53.875220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.364 [2024-10-11 11:53:53.875225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.364 [2024-10-11 11:53:53.875230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.364 [2024-10-11 11:53:53.875234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.364 [2024-10-11 11:53:53.875698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.624 [2024-10-11 11:53:54.067745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.624 [2024-10-11 11:53:54.099758] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.624 [2024-10-11 11:53:54.099961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1040808 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1040808 /var/tmp/bdevperf.sock 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1040808 ']' 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
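The "TLS support is considered experimental" notice above is emitted whenever a listener is created with TLS enabled: -k on the RPC command line, or "secure_channel": true in the replayed JSON. The full target-side wiring this test exercises, recapped verbatim from the xtrace earlier in the run (the key path is the test's temporary PSK file):

rpc="$SPDK/scripts/rpc.py"

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k              # -k: listen with TLS (secure channel)
$rpc bdev_malloc_create 32 4096 -b malloc0     # 32 MiB ramdisk, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w   # load the PSK into the keyring
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0       # only host1, presenting key0, may connect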
00:20:10.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.196 11:53:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:10.196 "subsystems": [ 00:20:10.196 { 00:20:10.196 "subsystem": "keyring", 00:20:10.196 "config": [ 00:20:10.196 { 00:20:10.196 "method": "keyring_file_add_key", 00:20:10.196 "params": { 00:20:10.196 "name": "key0", 00:20:10.196 "path": "/tmp/tmp.xGKfsFPR5w" 00:20:10.196 } 00:20:10.196 } 00:20:10.196 ] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "iobuf", 00:20:10.196 "config": [ 00:20:10.196 { 00:20:10.196 "method": "iobuf_set_options", 00:20:10.196 "params": { 00:20:10.196 "small_pool_count": 8192, 00:20:10.196 "large_pool_count": 1024, 00:20:10.196 "small_bufsize": 8192, 00:20:10.196 "large_bufsize": 135168 00:20:10.196 } 00:20:10.196 } 00:20:10.196 ] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "sock", 00:20:10.196 "config": [ 00:20:10.196 { 00:20:10.196 "method": "sock_set_default_impl", 00:20:10.196 "params": { 00:20:10.196 "impl_name": "posix" 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "sock_impl_set_options", 00:20:10.196 "params": { 00:20:10.196 "impl_name": "ssl", 00:20:10.196 "recv_buf_size": 4096, 00:20:10.196 "send_buf_size": 4096, 00:20:10.196 "enable_recv_pipe": true, 00:20:10.196 "enable_quickack": false, 00:20:10.196 "enable_placement_id": 0, 00:20:10.196 "enable_zerocopy_send_server": true, 00:20:10.196 "enable_zerocopy_send_client": false, 00:20:10.196 "zerocopy_threshold": 0, 00:20:10.196 "tls_version": 0, 00:20:10.196 "enable_ktls": false 00:20:10.196 } 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "method": "sock_impl_set_options", 00:20:10.196 "params": { 00:20:10.196 "impl_name": "posix", 00:20:10.196 "recv_buf_size": 2097152, 00:20:10.196 "send_buf_size": 2097152, 00:20:10.196 "enable_recv_pipe": true, 00:20:10.196 "enable_quickack": false, 00:20:10.196 "enable_placement_id": 0, 00:20:10.196 "enable_zerocopy_send_server": true, 00:20:10.196 "enable_zerocopy_send_client": false, 00:20:10.196 "zerocopy_threshold": 0, 00:20:10.196 "tls_version": 0, 00:20:10.196 "enable_ktls": false 00:20:10.196 } 00:20:10.196 } 00:20:10.196 ] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "vmd", 00:20:10.196 "config": [] 00:20:10.196 }, 00:20:10.196 { 00:20:10.196 "subsystem": "accel", 00:20:10.196 "config": [ 00:20:10.196 { 00:20:10.196 "method": "accel_set_options", 00:20:10.196 "params": { 00:20:10.196 "small_cache_size": 128, 00:20:10.196 "large_cache_size": 16, 00:20:10.196 "task_count": 2048, 00:20:10.196 "sequence_count": 2048, 00:20:10.196 "buf_count": 2048 00:20:10.196 } 00:20:10.196 } 00:20:10.197 ] 00:20:10.197 }, 00:20:10.197 { 00:20:10.197 "subsystem": "bdev", 00:20:10.197 "config": [ 00:20:10.197 { 00:20:10.197 "method": "bdev_set_options", 00:20:10.197 "params": { 00:20:10.197 "bdev_io_pool_size": 65535, 00:20:10.197 "bdev_io_cache_size": 256, 00:20:10.197 "bdev_auto_examine": true, 00:20:10.197 "iobuf_small_cache_size": 128, 00:20:10.197 "iobuf_large_cache_size": 16 
00:20:10.197 } 00:20:10.197 }, 00:20:10.197 { 00:20:10.197 "method": "bdev_raid_set_options", 00:20:10.197 "params": { 00:20:10.197 "process_window_size_kb": 1024, 00:20:10.197 "process_max_bandwidth_mb_sec": 0 00:20:10.197 } 00:20:10.197 }, 00:20:10.197 { 00:20:10.197 "method": "bdev_iscsi_set_options", 00:20:10.197 "params": { 00:20:10.197 "timeout_sec": 30 00:20:10.197 } 00:20:10.197 }, 00:20:10.197 { 00:20:10.197 "method": "bdev_nvme_set_options", 00:20:10.197 "params": { 00:20:10.197 "action_on_timeout": "none", 00:20:10.197 "timeout_us": 0, 00:20:10.197 "timeout_admin_us": 0, 00:20:10.197 "keep_alive_timeout_ms": 10000, 00:20:10.197 "arbitration_burst": 0, 00:20:10.197 "low_priority_weight": 0, 00:20:10.197 "medium_priority_weight": 0, 00:20:10.197 "high_priority_weight": 0, 00:20:10.197 "nvme_adminq_poll_period_us": 10000, 00:20:10.197 "nvme_ioq_poll_period_us": 0, 00:20:10.197 "io_queue_requests": 512, 00:20:10.197 "delay_cmd_submit": true, 00:20:10.197 "transport_retry_count": 4, 00:20:10.197 "bdev_retry_count": 3, 00:20:10.197 "transport_ack_timeout": 0, 00:20:10.197 "ctrlr_loss_timeout_sec": 0, 00:20:10.197 "reconnect_delay_sec": 0, 00:20:10.197 "fast_io_fail_timeout_sec": 0, 00:20:10.197 "disable_auto_failback": false, 00:20:10.197 "generate_uuids": false, 00:20:10.197 "transport_tos": 0, 00:20:10.197 "nvme_error_stat": false, 00:20:10.197 "rdma_srq_size": 0, 00:20:10.197 "io_path_stat": false, 00:20:10.197 "allow_accel_sequence": false, 00:20:10.197 "rdma_max_cq_size": 0, 00:20:10.197 "rdma_cm_event_timeout_ms": 0, 00:20:10.197 "dhchap_digests": [ 00:20:10.197 "sha256", 00:20:10.197 "sha384", 00:20:10.197 "sha512" 00:20:10.197 ], 00:20:10.197 "dhchap_dhgroups": [ 00:20:10.197 "null", 00:20:10.197 "ffdhe2048", 00:20:10.197 "ffdhe3072", 00:20:10.197 "ffdhe4096", 00:20:10.197 "ffdhe6144", 00:20:10.197 "ffdhe8192" 00:20:10.197 ] 00:20:10.197 } 00:20:10.197 }, 00:20:10.197 { 00:20:10.197 "method": "bdev_nvme_attach_controller", 00:20:10.197 "params": { 00:20:10.197 "name": "TLSTEST", 00:20:10.197 "trtype": "TCP", 00:20:10.197 "adrfam": "IPv4", 00:20:10.197 "traddr": "10.0.0.2", 00:20:10.197 "trsvcid": "4420", 00:20:10.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.197 "prchk_reftag": false, 00:20:10.197 "prchk_guard": false, 00:20:10.197 "ctrlr_loss_timeout_sec": 0, 00:20:10.197 "reconnect_delay_sec": 0, 00:20:10.197 "fast_io_fail_timeout_sec": 0, 00:20:10.197 "psk": "key0", 00:20:10.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.197 "hdgst": false, 00:20:10.197 "ddgst": false, 00:20:10.197 "multipath": "multipath" 00:20:10.197 } 00:20:10.197 }, 00:20:10.197 { 00:20:10.197 "method": "bdev_nvme_set_hotplug", 00:20:10.197 "params": { 00:20:10.197 "period_us": 100000, 00:20:10.197 "enable": false 00:20:10.197 } 00:20:10.197 }, 00:20:10.197 { 00:20:10.197 "method": "bdev_wait_for_examine" 00:20:10.197 } 00:20:10.197 ] 00:20:10.197 }, 00:20:10.197 { 00:20:10.197 "subsystem": "nbd", 00:20:10.197 "config": [] 00:20:10.197 } 00:20:10.197 ] 00:20:10.197 }' 00:20:10.197 [2024-10-11 11:53:54.625921] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
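On the initiator side, bdevperf is started idle (-z) on its own RPC socket and then configured over that socket: the same PSK is loaded into its keyring, and bdev_nvme_attach_controller --psk key0 opens the TLS connection (producing the matching experimental-TLS notice from bdev_nvme_rpc.c). Recapped from the commands shown in this log:

# Start bdevperf waiting for RPC (-z), core mask 0x4, 128-deep 4 KiB verify for 10 s.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Kick off the actual I/O once the bdev (TLSTESTn1) exists.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests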
00:20:10.197 [2024-10-11 11:53:54.625974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040808 ] 00:20:10.197 [2024-10-11 11:53:54.700207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.197 [2024-10-11 11:53:54.729479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.458 [2024-10-11 11:53:54.863221] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.029 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.029 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:11.029 11:53:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:11.029 Running I/O for 10 seconds... 00:20:12.912 4795.00 IOPS, 18.73 MiB/s [2024-10-11T09:53:58.929Z] 5024.00 IOPS, 19.62 MiB/s [2024-10-11T09:53:59.871Z] 5510.33 IOPS, 21.52 MiB/s [2024-10-11T09:54:00.814Z] 5360.00 IOPS, 20.94 MiB/s [2024-10-11T09:54:01.755Z] 5485.20 IOPS, 21.43 MiB/s [2024-10-11T09:54:02.696Z] 5599.33 IOPS, 21.87 MiB/s [2024-10-11T09:54:03.638Z] 5606.71 IOPS, 21.90 MiB/s [2024-10-11T09:54:04.579Z] 5532.25 IOPS, 21.61 MiB/s [2024-10-11T09:54:05.964Z] 5472.11 IOPS, 21.38 MiB/s [2024-10-11T09:54:05.964Z] 5528.20 IOPS, 21.59 MiB/s 00:20:21.332 Latency(us) 00:20:21.332 [2024-10-11T09:54:05.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.332 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:21.332 Verification LBA range: start 0x0 length 0x2000 00:20:21.332 TLSTESTn1 : 10.02 5532.28 21.61 0.00 0.00 23103.13 4587.52 24794.45 00:20:21.332 [2024-10-11T09:54:05.964Z] =================================================================================================================== 00:20:21.332 [2024-10-11T09:54:05.964Z] Total : 5532.28 21.61 0.00 0.00 23103.13 4587.52 24794.45 00:20:21.332 { 00:20:21.332 "results": [ 00:20:21.332 { 00:20:21.332 "job": "TLSTESTn1", 00:20:21.332 "core_mask": "0x4", 00:20:21.332 "workload": "verify", 00:20:21.332 "status": "finished", 00:20:21.332 "verify_range": { 00:20:21.332 "start": 0, 00:20:21.332 "length": 8192 00:20:21.332 }, 00:20:21.332 "queue_depth": 128, 00:20:21.332 "io_size": 4096, 00:20:21.333 "runtime": 10.015219, 00:20:21.333 "iops": 5532.280422425111, 00:20:21.333 "mibps": 21.61047040009809, 00:20:21.333 "io_failed": 0, 00:20:21.333 "io_timeout": 0, 00:20:21.333 "avg_latency_us": 23103.125751860476, 00:20:21.333 "min_latency_us": 4587.52, 00:20:21.333 "max_latency_us": 24794.453333333335 00:20:21.333 } 00:20:21.333 ], 00:20:21.333 "core_count": 1 00:20:21.333 } 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1040808 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1040808 ']' 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1040808 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040808 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040808' 00:20:21.333 killing process with pid 1040808 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1040808 00:20:21.333 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.333 00:20:21.333 Latency(us) 00:20:21.333 [2024-10-11T09:54:05.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.333 [2024-10-11T09:54:05.965Z] =================================================================================================================== 00:20:21.333 [2024-10-11T09:54:05.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1040808 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1040740 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1040740 ']' 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1040740 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1040740 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1040740' 00:20:21.333 killing process with pid 1040740 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1040740 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1040740 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1043220 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1043220 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
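The per-run results JSON above reports both "iops" and "mibps"; the second is derived from the first via the 4096-byte I/O size, so the figures can be cross-checked directly:

# MiB/s = IOPS * io_size / 2^20; reproduces the "mibps" field above.
awk 'BEGIN { printf "%.2f\n", 5532.280422425111 * 4096 / (1024 * 1024) }'   # -> 21.61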
00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1043220 ']' 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.333 11:54:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.593 [2024-10-11 11:54:05.984950] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:21.593 [2024-10-11 11:54:05.985012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.593 [2024-10-11 11:54:06.070396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.593 [2024-10-11 11:54:06.118412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.593 [2024-10-11 11:54:06.118468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.593 [2024-10-11 11:54:06.118476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.593 [2024-10-11 11:54:06.118483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.593 [2024-10-11 11:54:06.118489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
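The app_setup_trace notices above repeat for every target instance because each is launched with -e 0xFFFF (all tracepoint groups enabled). Both follow-ups they suggest work while the target is running; the commands below are taken from the notices themselves (the spdk_trace binary path under build/bin is an assumption matching this build layout):

# Attach to the live target's trace buffer and dump a snapshot...
"$SPDK/build/bin/spdk_trace" -s nvmf -i 0

# ...or keep the shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/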
00:20:21.593 [2024-10-11 11:54:06.119267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.165 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.165 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:22.165 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:22.165 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:22.165 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.426 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.426 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.xGKfsFPR5w 00:20:22.426 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xGKfsFPR5w 00:20:22.426 11:54:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:22.426 [2024-10-11 11:54:07.001182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.426 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:22.687 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:22.948 [2024-10-11 11:54:07.374133] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.948 [2024-10-11 11:54:07.374502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.948 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:22.948 malloc0 00:20:23.208 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:23.208 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:20:23.469 11:54:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1043587 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1043587 /var/tmp/bdevperf.sock 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1043587 ']' 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.729 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.729 [2024-10-11 11:54:08.170422] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:23.729 [2024-10-11 11:54:08.170496] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043587 ] 00:20:23.729 [2024-10-11 11:54:08.250335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.729 [2024-10-11 11:54:08.285089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.671 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.671 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:24.671 11:54:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:20:24.671 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:24.671 [2024-10-11 11:54:09.271283] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.932 nvme0n1 00:20:24.932 11:54:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:24.932 Running I/O for 1 seconds... 
00:20:25.873 5598.00 IOPS, 21.87 MiB/s 00:20:25.873 Latency(us) 00:20:25.873 [2024-10-11T09:54:10.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.873 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:25.873 Verification LBA range: start 0x0 length 0x2000 00:20:25.873 nvme0n1 : 1.02 5632.46 22.00 0.00 0.00 22552.28 7208.96 26760.53 00:20:25.873 [2024-10-11T09:54:10.505Z] =================================================================================================================== 00:20:25.873 [2024-10-11T09:54:10.505Z] Total : 5632.46 22.00 0.00 0.00 22552.28 7208.96 26760.53 00:20:25.873 { 00:20:25.873 "results": [ 00:20:25.873 { 00:20:25.873 "job": "nvme0n1", 00:20:25.873 "core_mask": "0x2", 00:20:25.873 "workload": "verify", 00:20:25.873 "status": "finished", 00:20:25.873 "verify_range": { 00:20:25.873 "start": 0, 00:20:25.873 "length": 8192 00:20:25.873 }, 00:20:25.873 "queue_depth": 128, 00:20:25.873 "io_size": 4096, 00:20:25.873 "runtime": 1.016608, 00:20:25.873 "iops": 5632.456167962479, 00:20:25.873 "mibps": 22.001781906103435, 00:20:25.873 "io_failed": 0, 00:20:25.873 "io_timeout": 0, 00:20:25.873 "avg_latency_us": 22552.28319012691, 00:20:25.873 "min_latency_us": 7208.96, 00:20:25.873 "max_latency_us": 26760.533333333333 00:20:25.873 } 00:20:25.873 ], 00:20:25.873 "core_count": 1 00:20:25.874 } 00:20:25.874 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1043587 00:20:25.874 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1043587 ']' 00:20:25.874 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1043587 00:20:25.874 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:25.874 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.874 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1043587 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1043587' 00:20:26.135 killing process with pid 1043587 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1043587 00:20:26.135 Received shutdown signal, test time was about 1.000000 seconds 00:20:26.135 00:20:26.135 Latency(us) 00:20:26.135 [2024-10-11T09:54:10.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.135 [2024-10-11T09:54:10.767Z] =================================================================================================================== 00:20:26.135 [2024-10-11T09:54:10.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1043587 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1043220 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1043220 ']' 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1043220 00:20:26.135 11:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1043220 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1043220' 00:20:26.135 killing process with pid 1043220 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1043220 00:20:26.135 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1043220 00:20:26.397 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:26.397 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1044635 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1044635 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1044635 ']' 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.398 11:54:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.398 [2024-10-11 11:54:10.914729] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:26.398 [2024-10-11 11:54:10.914788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.398 [2024-10-11 11:54:11.000118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.659 [2024-10-11 11:54:11.050041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.659 [2024-10-11 11:54:11.050094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:26.659 [2024-10-11 11:54:11.050102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.659 [2024-10-11 11:54:11.050109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.659 [2024-10-11 11:54:11.050115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.659 [2024-10-11 11:54:11.051112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.230 [2024-10-11 11:54:11.770999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.230 malloc0 00:20:27.230 [2024-10-11 11:54:11.801091] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.230 [2024-10-11 11:54:11.801442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1044765 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1044765 /var/tmp/bdevperf.sock 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1044765 ']' 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.230 11:54:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.490 [2024-10-11 11:54:11.882254] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:20:27.490 [2024-10-11 11:54:11.882319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1044765 ] 00:20:27.490 [2024-10-11 11:54:11.961359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.490 [2024-10-11 11:54:11.996743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.490 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:27.490 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:27.490 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGKfsFPR5w 00:20:27.751 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:28.012 [2024-10-11 11:54:12.413605] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.012 nvme0n1 00:20:28.012 11:54:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:28.012 Running I/O for 1 seconds... 00:20:29.395 4327.00 IOPS, 16.90 MiB/s 00:20:29.395 Latency(us) 00:20:29.395 [2024-10-11T09:54:14.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.395 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:29.395 Verification LBA range: start 0x0 length 0x2000 00:20:29.395 nvme0n1 : 1.01 4400.88 17.19 0.00 0.00 28895.07 4642.13 76458.67 00:20:29.395 [2024-10-11T09:54:14.027Z] =================================================================================================================== 00:20:29.395 [2024-10-11T09:54:14.027Z] Total : 4400.88 17.19 0.00 0.00 28895.07 4642.13 76458.67 00:20:29.395 { 00:20:29.395 "results": [ 00:20:29.395 { 00:20:29.395 "job": "nvme0n1", 00:20:29.395 "core_mask": "0x2", 00:20:29.395 "workload": "verify", 00:20:29.395 "status": "finished", 00:20:29.395 "verify_range": { 00:20:29.395 "start": 0, 00:20:29.395 "length": 8192 00:20:29.395 }, 00:20:29.395 "queue_depth": 128, 00:20:29.395 "io_size": 4096, 00:20:29.395 "runtime": 1.012297, 00:20:29.395 "iops": 4400.882349745183, 00:20:29.395 "mibps": 17.19094667869212, 00:20:29.395 "io_failed": 0, 00:20:29.395 "io_timeout": 0, 00:20:29.395 "avg_latency_us": 28895.071868312756, 00:20:29.395 "min_latency_us": 4642.133333333333, 00:20:29.395 "max_latency_us": 76458.66666666667 00:20:29.395 } 00:20:29.395 ], 00:20:29.395 "core_count": 1 00:20:29.395 } 00:20:29.395 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:29.395 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.395 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.395 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.395 11:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:29.395 "subsystems": [ 00:20:29.395 { 00:20:29.395 "subsystem": "keyring", 00:20:29.395 "config": [ 00:20:29.395 { 00:20:29.395 "method": "keyring_file_add_key", 00:20:29.395 "params": { 00:20:29.395 "name": "key0", 00:20:29.395 "path": "/tmp/tmp.xGKfsFPR5w" 00:20:29.395 } 00:20:29.395 } 00:20:29.395 ] 00:20:29.395 }, 00:20:29.395 { 00:20:29.395 "subsystem": "iobuf", 00:20:29.395 "config": [ 00:20:29.395 { 00:20:29.395 "method": "iobuf_set_options", 00:20:29.395 "params": { 00:20:29.395 "small_pool_count": 8192, 00:20:29.395 "large_pool_count": 1024, 00:20:29.395 "small_bufsize": 8192, 00:20:29.395 "large_bufsize": 135168 00:20:29.395 } 00:20:29.395 } 00:20:29.395 ] 00:20:29.395 }, 00:20:29.395 { 00:20:29.395 "subsystem": "sock", 00:20:29.395 "config": [ 00:20:29.395 { 00:20:29.395 "method": "sock_set_default_impl", 00:20:29.395 "params": { 00:20:29.395 "impl_name": "posix" 00:20:29.395 } 00:20:29.395 }, 00:20:29.395 { 00:20:29.395 "method": "sock_impl_set_options", 00:20:29.395 "params": { 00:20:29.395 "impl_name": "ssl", 00:20:29.395 "recv_buf_size": 4096, 00:20:29.395 "send_buf_size": 4096, 00:20:29.395 "enable_recv_pipe": true, 00:20:29.395 "enable_quickack": false, 00:20:29.395 "enable_placement_id": 0, 00:20:29.395 "enable_zerocopy_send_server": true, 00:20:29.395 "enable_zerocopy_send_client": false, 00:20:29.395 "zerocopy_threshold": 0, 00:20:29.395 "tls_version": 0, 00:20:29.395 "enable_ktls": false 00:20:29.395 } 00:20:29.395 }, 00:20:29.395 { 00:20:29.395 "method": "sock_impl_set_options", 00:20:29.395 "params": { 00:20:29.395 "impl_name": "posix", 00:20:29.395 "recv_buf_size": 2097152, 00:20:29.396 "send_buf_size": 2097152, 00:20:29.396 "enable_recv_pipe": true, 00:20:29.396 "enable_quickack": false, 00:20:29.396 "enable_placement_id": 0, 00:20:29.396 "enable_zerocopy_send_server": true, 00:20:29.396 "enable_zerocopy_send_client": false, 00:20:29.396 "zerocopy_threshold": 0, 00:20:29.396 "tls_version": 0, 00:20:29.396 "enable_ktls": false 00:20:29.396 } 00:20:29.396 } 00:20:29.396 ] 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "subsystem": "vmd", 00:20:29.396 "config": [] 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "subsystem": "accel", 00:20:29.396 "config": [ 00:20:29.396 { 00:20:29.396 "method": "accel_set_options", 00:20:29.396 "params": { 00:20:29.396 "small_cache_size": 128, 00:20:29.396 "large_cache_size": 16, 00:20:29.396 "task_count": 2048, 00:20:29.396 "sequence_count": 2048, 00:20:29.396 "buf_count": 2048 00:20:29.396 } 00:20:29.396 } 00:20:29.396 ] 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "subsystem": "bdev", 00:20:29.396 "config": [ 00:20:29.396 { 00:20:29.396 "method": "bdev_set_options", 00:20:29.396 "params": { 00:20:29.396 "bdev_io_pool_size": 65535, 00:20:29.396 "bdev_io_cache_size": 256, 00:20:29.396 "bdev_auto_examine": true, 00:20:29.396 "iobuf_small_cache_size": 128, 00:20:29.396 "iobuf_large_cache_size": 16 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "bdev_raid_set_options", 00:20:29.396 "params": { 00:20:29.396 "process_window_size_kb": 1024, 00:20:29.396 "process_max_bandwidth_mb_sec": 0 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "bdev_iscsi_set_options", 00:20:29.396 "params": { 00:20:29.396 "timeout_sec": 30 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "bdev_nvme_set_options", 00:20:29.396 "params": { 00:20:29.396 "action_on_timeout": "none", 00:20:29.396 "timeout_us": 0, 00:20:29.396 
"timeout_admin_us": 0, 00:20:29.396 "keep_alive_timeout_ms": 10000, 00:20:29.396 "arbitration_burst": 0, 00:20:29.396 "low_priority_weight": 0, 00:20:29.396 "medium_priority_weight": 0, 00:20:29.396 "high_priority_weight": 0, 00:20:29.396 "nvme_adminq_poll_period_us": 10000, 00:20:29.396 "nvme_ioq_poll_period_us": 0, 00:20:29.396 "io_queue_requests": 0, 00:20:29.396 "delay_cmd_submit": true, 00:20:29.396 "transport_retry_count": 4, 00:20:29.396 "bdev_retry_count": 3, 00:20:29.396 "transport_ack_timeout": 0, 00:20:29.396 "ctrlr_loss_timeout_sec": 0, 00:20:29.396 "reconnect_delay_sec": 0, 00:20:29.396 "fast_io_fail_timeout_sec": 0, 00:20:29.396 "disable_auto_failback": false, 00:20:29.396 "generate_uuids": false, 00:20:29.396 "transport_tos": 0, 00:20:29.396 "nvme_error_stat": false, 00:20:29.396 "rdma_srq_size": 0, 00:20:29.396 "io_path_stat": false, 00:20:29.396 "allow_accel_sequence": false, 00:20:29.396 "rdma_max_cq_size": 0, 00:20:29.396 "rdma_cm_event_timeout_ms": 0, 00:20:29.396 "dhchap_digests": [ 00:20:29.396 "sha256", 00:20:29.396 "sha384", 00:20:29.396 "sha512" 00:20:29.396 ], 00:20:29.396 "dhchap_dhgroups": [ 00:20:29.396 "null", 00:20:29.396 "ffdhe2048", 00:20:29.396 "ffdhe3072", 00:20:29.396 "ffdhe4096", 00:20:29.396 "ffdhe6144", 00:20:29.396 "ffdhe8192" 00:20:29.396 ] 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "bdev_nvme_set_hotplug", 00:20:29.396 "params": { 00:20:29.396 "period_us": 100000, 00:20:29.396 "enable": false 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "bdev_malloc_create", 00:20:29.396 "params": { 00:20:29.396 "name": "malloc0", 00:20:29.396 "num_blocks": 8192, 00:20:29.396 "block_size": 4096, 00:20:29.396 "physical_block_size": 4096, 00:20:29.396 "uuid": "248373a3-8dd1-471b-9fe3-6aa66ab6597d", 00:20:29.396 "optimal_io_boundary": 0, 00:20:29.396 "md_size": 0, 00:20:29.396 "dif_type": 0, 00:20:29.396 "dif_is_head_of_md": false, 00:20:29.396 "dif_pi_format": 0 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "bdev_wait_for_examine" 00:20:29.396 } 00:20:29.396 ] 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "subsystem": "nbd", 00:20:29.396 "config": [] 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "subsystem": "scheduler", 00:20:29.396 "config": [ 00:20:29.396 { 00:20:29.396 "method": "framework_set_scheduler", 00:20:29.396 "params": { 00:20:29.396 "name": "static" 00:20:29.396 } 00:20:29.396 } 00:20:29.396 ] 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "subsystem": "nvmf", 00:20:29.396 "config": [ 00:20:29.396 { 00:20:29.396 "method": "nvmf_set_config", 00:20:29.396 "params": { 00:20:29.396 "discovery_filter": "match_any", 00:20:29.396 "admin_cmd_passthru": { 00:20:29.396 "identify_ctrlr": false 00:20:29.396 }, 00:20:29.396 "dhchap_digests": [ 00:20:29.396 "sha256", 00:20:29.396 "sha384", 00:20:29.396 "sha512" 00:20:29.396 ], 00:20:29.396 "dhchap_dhgroups": [ 00:20:29.396 "null", 00:20:29.396 "ffdhe2048", 00:20:29.396 "ffdhe3072", 00:20:29.396 "ffdhe4096", 00:20:29.396 "ffdhe6144", 00:20:29.396 "ffdhe8192" 00:20:29.396 ] 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "nvmf_set_max_subsystems", 00:20:29.396 "params": { 00:20:29.396 "max_subsystems": 1024 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "nvmf_set_crdt", 00:20:29.396 "params": { 00:20:29.396 "crdt1": 0, 00:20:29.396 "crdt2": 0, 00:20:29.396 "crdt3": 0 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "nvmf_create_transport", 00:20:29.396 "params": { 00:20:29.396 "trtype": 
"TCP", 00:20:29.396 "max_queue_depth": 128, 00:20:29.396 "max_io_qpairs_per_ctrlr": 127, 00:20:29.396 "in_capsule_data_size": 4096, 00:20:29.396 "max_io_size": 131072, 00:20:29.396 "io_unit_size": 131072, 00:20:29.396 "max_aq_depth": 128, 00:20:29.396 "num_shared_buffers": 511, 00:20:29.396 "buf_cache_size": 4294967295, 00:20:29.396 "dif_insert_or_strip": false, 00:20:29.396 "zcopy": false, 00:20:29.396 "c2h_success": false, 00:20:29.396 "sock_priority": 0, 00:20:29.396 "abort_timeout_sec": 1, 00:20:29.396 "ack_timeout": 0, 00:20:29.396 "data_wr_pool_size": 0 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "nvmf_create_subsystem", 00:20:29.396 "params": { 00:20:29.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.396 "allow_any_host": false, 00:20:29.396 "serial_number": "00000000000000000000", 00:20:29.396 "model_number": "SPDK bdev Controller", 00:20:29.396 "max_namespaces": 32, 00:20:29.396 "min_cntlid": 1, 00:20:29.396 "max_cntlid": 65519, 00:20:29.396 "ana_reporting": false 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "nvmf_subsystem_add_host", 00:20:29.396 "params": { 00:20:29.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.396 "host": "nqn.2016-06.io.spdk:host1", 00:20:29.396 "psk": "key0" 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "nvmf_subsystem_add_ns", 00:20:29.396 "params": { 00:20:29.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.396 "namespace": { 00:20:29.396 "nsid": 1, 00:20:29.396 "bdev_name": "malloc0", 00:20:29.396 "nguid": "248373A38DD1471B9FE36AA66AB6597D", 00:20:29.396 "uuid": "248373a3-8dd1-471b-9fe3-6aa66ab6597d", 00:20:29.396 "no_auto_visible": false 00:20:29.396 } 00:20:29.396 } 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "method": "nvmf_subsystem_add_listener", 00:20:29.396 "params": { 00:20:29.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.396 "listen_address": { 00:20:29.396 "trtype": "TCP", 00:20:29.396 "adrfam": "IPv4", 00:20:29.396 "traddr": "10.0.0.2", 00:20:29.396 "trsvcid": "4420" 00:20:29.396 }, 00:20:29.396 "secure_channel": false, 00:20:29.396 "sock_impl": "ssl" 00:20:29.396 } 00:20:29.396 } 00:20:29.396 ] 00:20:29.396 } 00:20:29.396 ] 00:20:29.396 }' 00:20:29.396 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:29.396 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:29.396 "subsystems": [ 00:20:29.396 { 00:20:29.396 "subsystem": "keyring", 00:20:29.396 "config": [ 00:20:29.396 { 00:20:29.396 "method": "keyring_file_add_key", 00:20:29.396 "params": { 00:20:29.396 "name": "key0", 00:20:29.396 "path": "/tmp/tmp.xGKfsFPR5w" 00:20:29.396 } 00:20:29.396 } 00:20:29.396 ] 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "subsystem": "iobuf", 00:20:29.396 "config": [ 00:20:29.396 { 00:20:29.396 "method": "iobuf_set_options", 00:20:29.396 "params": { 00:20:29.396 "small_pool_count": 8192, 00:20:29.396 "large_pool_count": 1024, 00:20:29.396 "small_bufsize": 8192, 00:20:29.396 "large_bufsize": 135168 00:20:29.396 } 00:20:29.396 } 00:20:29.396 ] 00:20:29.396 }, 00:20:29.396 { 00:20:29.396 "subsystem": "sock", 00:20:29.396 "config": [ 00:20:29.396 { 00:20:29.396 "method": "sock_set_default_impl", 00:20:29.396 "params": { 00:20:29.397 "impl_name": "posix" 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "sock_impl_set_options", 00:20:29.397 "params": { 00:20:29.397 "impl_name": "ssl", 00:20:29.397 
"recv_buf_size": 4096, 00:20:29.397 "send_buf_size": 4096, 00:20:29.397 "enable_recv_pipe": true, 00:20:29.397 "enable_quickack": false, 00:20:29.397 "enable_placement_id": 0, 00:20:29.397 "enable_zerocopy_send_server": true, 00:20:29.397 "enable_zerocopy_send_client": false, 00:20:29.397 "zerocopy_threshold": 0, 00:20:29.397 "tls_version": 0, 00:20:29.397 "enable_ktls": false 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "sock_impl_set_options", 00:20:29.397 "params": { 00:20:29.397 "impl_name": "posix", 00:20:29.397 "recv_buf_size": 2097152, 00:20:29.397 "send_buf_size": 2097152, 00:20:29.397 "enable_recv_pipe": true, 00:20:29.397 "enable_quickack": false, 00:20:29.397 "enable_placement_id": 0, 00:20:29.397 "enable_zerocopy_send_server": true, 00:20:29.397 "enable_zerocopy_send_client": false, 00:20:29.397 "zerocopy_threshold": 0, 00:20:29.397 "tls_version": 0, 00:20:29.397 "enable_ktls": false 00:20:29.397 } 00:20:29.397 } 00:20:29.397 ] 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "subsystem": "vmd", 00:20:29.397 "config": [] 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "subsystem": "accel", 00:20:29.397 "config": [ 00:20:29.397 { 00:20:29.397 "method": "accel_set_options", 00:20:29.397 "params": { 00:20:29.397 "small_cache_size": 128, 00:20:29.397 "large_cache_size": 16, 00:20:29.397 "task_count": 2048, 00:20:29.397 "sequence_count": 2048, 00:20:29.397 "buf_count": 2048 00:20:29.397 } 00:20:29.397 } 00:20:29.397 ] 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "subsystem": "bdev", 00:20:29.397 "config": [ 00:20:29.397 { 00:20:29.397 "method": "bdev_set_options", 00:20:29.397 "params": { 00:20:29.397 "bdev_io_pool_size": 65535, 00:20:29.397 "bdev_io_cache_size": 256, 00:20:29.397 "bdev_auto_examine": true, 00:20:29.397 "iobuf_small_cache_size": 128, 00:20:29.397 "iobuf_large_cache_size": 16 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "bdev_raid_set_options", 00:20:29.397 "params": { 00:20:29.397 "process_window_size_kb": 1024, 00:20:29.397 "process_max_bandwidth_mb_sec": 0 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "bdev_iscsi_set_options", 00:20:29.397 "params": { 00:20:29.397 "timeout_sec": 30 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "bdev_nvme_set_options", 00:20:29.397 "params": { 00:20:29.397 "action_on_timeout": "none", 00:20:29.397 "timeout_us": 0, 00:20:29.397 "timeout_admin_us": 0, 00:20:29.397 "keep_alive_timeout_ms": 10000, 00:20:29.397 "arbitration_burst": 0, 00:20:29.397 "low_priority_weight": 0, 00:20:29.397 "medium_priority_weight": 0, 00:20:29.397 "high_priority_weight": 0, 00:20:29.397 "nvme_adminq_poll_period_us": 10000, 00:20:29.397 "nvme_ioq_poll_period_us": 0, 00:20:29.397 "io_queue_requests": 512, 00:20:29.397 "delay_cmd_submit": true, 00:20:29.397 "transport_retry_count": 4, 00:20:29.397 "bdev_retry_count": 3, 00:20:29.397 "transport_ack_timeout": 0, 00:20:29.397 "ctrlr_loss_timeout_sec": 0, 00:20:29.397 "reconnect_delay_sec": 0, 00:20:29.397 "fast_io_fail_timeout_sec": 0, 00:20:29.397 "disable_auto_failback": false, 00:20:29.397 "generate_uuids": false, 00:20:29.397 "transport_tos": 0, 00:20:29.397 "nvme_error_stat": false, 00:20:29.397 "rdma_srq_size": 0, 00:20:29.397 "io_path_stat": false, 00:20:29.397 "allow_accel_sequence": false, 00:20:29.397 "rdma_max_cq_size": 0, 00:20:29.397 "rdma_cm_event_timeout_ms": 0, 00:20:29.397 "dhchap_digests": [ 00:20:29.397 "sha256", 00:20:29.397 "sha384", 00:20:29.397 "sha512" 00:20:29.397 ], 00:20:29.397 "dhchap_dhgroups": [ 
00:20:29.397 "null", 00:20:29.397 "ffdhe2048", 00:20:29.397 "ffdhe3072", 00:20:29.397 "ffdhe4096", 00:20:29.397 "ffdhe6144", 00:20:29.397 "ffdhe8192" 00:20:29.397 ] 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "bdev_nvme_attach_controller", 00:20:29.397 "params": { 00:20:29.397 "name": "nvme0", 00:20:29.397 "trtype": "TCP", 00:20:29.397 "adrfam": "IPv4", 00:20:29.397 "traddr": "10.0.0.2", 00:20:29.397 "trsvcid": "4420", 00:20:29.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.397 "prchk_reftag": false, 00:20:29.397 "prchk_guard": false, 00:20:29.397 "ctrlr_loss_timeout_sec": 0, 00:20:29.397 "reconnect_delay_sec": 0, 00:20:29.397 "fast_io_fail_timeout_sec": 0, 00:20:29.397 "psk": "key0", 00:20:29.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.397 "hdgst": false, 00:20:29.397 "ddgst": false, 00:20:29.397 "multipath": "multipath" 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "bdev_nvme_set_hotplug", 00:20:29.397 "params": { 00:20:29.397 "period_us": 100000, 00:20:29.397 "enable": false 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "bdev_enable_histogram", 00:20:29.397 "params": { 00:20:29.397 "name": "nvme0n1", 00:20:29.397 "enable": true 00:20:29.397 } 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "method": "bdev_wait_for_examine" 00:20:29.397 } 00:20:29.397 ] 00:20:29.397 }, 00:20:29.397 { 00:20:29.397 "subsystem": "nbd", 00:20:29.397 "config": [] 00:20:29.397 } 00:20:29.397 ] 00:20:29.397 }' 00:20:29.397 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1044765 00:20:29.397 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1044765 ']' 00:20:29.397 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1044765 00:20:29.397 11:54:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:29.397 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:29.397 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044765 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044765' 00:20:29.658 killing process with pid 1044765 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1044765 00:20:29.658 Received shutdown signal, test time was about 1.000000 seconds 00:20:29.658 00:20:29.658 Latency(us) 00:20:29.658 [2024-10-11T09:54:14.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.658 [2024-10-11T09:54:14.290Z] =================================================================================================================== 00:20:29.658 [2024-10-11T09:54:14.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1044765 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1044635 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1044635 ']' 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 1044635 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1044635 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1044635' 00:20:29.658 killing process with pid 1044635 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1044635 00:20:29.658 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1044635 00:20:29.918 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:29.918 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:29.918 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:29.918 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:29.919 "subsystems": [ 00:20:29.919 { 00:20:29.919 "subsystem": "keyring", 00:20:29.919 "config": [ 00:20:29.919 { 00:20:29.919 "method": "keyring_file_add_key", 00:20:29.919 "params": { 00:20:29.919 "name": "key0", 00:20:29.919 "path": "/tmp/tmp.xGKfsFPR5w" 00:20:29.919 } 00:20:29.919 } 00:20:29.919 ] 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "subsystem": "iobuf", 00:20:29.919 "config": [ 00:20:29.919 { 00:20:29.919 "method": "iobuf_set_options", 00:20:29.919 "params": { 00:20:29.919 "small_pool_count": 8192, 00:20:29.919 "large_pool_count": 1024, 00:20:29.919 "small_bufsize": 8192, 00:20:29.919 "large_bufsize": 135168 00:20:29.919 } 00:20:29.919 } 00:20:29.919 ] 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "subsystem": "sock", 00:20:29.919 "config": [ 00:20:29.919 { 00:20:29.919 "method": "sock_set_default_impl", 00:20:29.919 "params": { 00:20:29.919 "impl_name": "posix" 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "sock_impl_set_options", 00:20:29.919 "params": { 00:20:29.919 "impl_name": "ssl", 00:20:29.919 "recv_buf_size": 4096, 00:20:29.919 "send_buf_size": 4096, 00:20:29.919 "enable_recv_pipe": true, 00:20:29.919 "enable_quickack": false, 00:20:29.919 "enable_placement_id": 0, 00:20:29.919 "enable_zerocopy_send_server": true, 00:20:29.919 "enable_zerocopy_send_client": false, 00:20:29.919 "zerocopy_threshold": 0, 00:20:29.919 "tls_version": 0, 00:20:29.919 "enable_ktls": false 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "sock_impl_set_options", 00:20:29.919 "params": { 00:20:29.919 "impl_name": "posix", 00:20:29.919 "recv_buf_size": 2097152, 00:20:29.919 "send_buf_size": 2097152, 00:20:29.919 "enable_recv_pipe": true, 00:20:29.919 "enable_quickack": false, 00:20:29.919 "enable_placement_id": 0, 00:20:29.919 "enable_zerocopy_send_server": true, 00:20:29.919 "enable_zerocopy_send_client": false, 00:20:29.919 "zerocopy_threshold": 0, 00:20:29.919 "tls_version": 0, 00:20:29.919 "enable_ktls": false 00:20:29.919 } 00:20:29.919 } 00:20:29.919 ] 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 
"subsystem": "vmd", 00:20:29.919 "config": [] 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "subsystem": "accel", 00:20:29.919 "config": [ 00:20:29.919 { 00:20:29.919 "method": "accel_set_options", 00:20:29.919 "params": { 00:20:29.919 "small_cache_size": 128, 00:20:29.919 "large_cache_size": 16, 00:20:29.919 "task_count": 2048, 00:20:29.919 "sequence_count": 2048, 00:20:29.919 "buf_count": 2048 00:20:29.919 } 00:20:29.919 } 00:20:29.919 ] 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "subsystem": "bdev", 00:20:29.919 "config": [ 00:20:29.919 { 00:20:29.919 "method": "bdev_set_options", 00:20:29.919 "params": { 00:20:29.919 "bdev_io_pool_size": 65535, 00:20:29.919 "bdev_io_cache_size": 256, 00:20:29.919 "bdev_auto_examine": true, 00:20:29.919 "iobuf_small_cache_size": 128, 00:20:29.919 "iobuf_large_cache_size": 16 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "bdev_raid_set_options", 00:20:29.919 "params": { 00:20:29.919 "process_window_size_kb": 1024, 00:20:29.919 "process_max_bandwidth_mb_sec": 0 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "bdev_iscsi_set_options", 00:20:29.919 "params": { 00:20:29.919 "timeout_sec": 30 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "bdev_nvme_set_options", 00:20:29.919 "params": { 00:20:29.919 "action_on_timeout": "none", 00:20:29.919 "timeout_us": 0, 00:20:29.919 "timeout_admin_us": 0, 00:20:29.919 "keep_alive_timeout_ms": 10000, 00:20:29.919 "arbitration_burst": 0, 00:20:29.919 "low_priority_weight": 0, 00:20:29.919 "medium_priority_weight": 0, 00:20:29.919 "high_priority_weight": 0, 00:20:29.919 "nvme_adminq_poll_period_us": 10000, 00:20:29.919 "nvme_ioq_poll_period_us": 0, 00:20:29.919 "io_queue_requests": 0, 00:20:29.919 "delay_cmd_submit": true, 00:20:29.919 "transport_retry_count": 4, 00:20:29.919 "bdev_retry_count": 3, 00:20:29.919 "transport_ack_timeout": 0, 00:20:29.919 "ctrlr_loss_timeout_sec": 0, 00:20:29.919 "reconnect_delay_sec": 0, 00:20:29.919 "fast_io_fail_timeout_sec": 0, 00:20:29.919 "disable_auto_failback": false, 00:20:29.919 "generate_uuids": false, 00:20:29.919 "transport_tos": 0, 00:20:29.919 "nvme_error_stat": false, 00:20:29.919 "rdma_srq_size": 0, 00:20:29.919 "io_path_stat": false, 00:20:29.919 "allow_accel_sequence": false, 00:20:29.919 "rdma_max_cq_size": 0, 00:20:29.919 "rdma_cm_event_timeout_ms": 0, 00:20:29.919 "dhchap_digests": [ 00:20:29.919 "sha256", 00:20:29.919 "sha384", 00:20:29.919 "sha512" 00:20:29.919 ], 00:20:29.919 "dhchap_dhgroups": [ 00:20:29.919 "null", 00:20:29.919 "ffdhe2048", 00:20:29.919 "ffdhe3072", 00:20:29.919 "ffdhe4096", 00:20:29.919 "ffdhe6144", 00:20:29.919 "ffdhe8192" 00:20:29.919 ] 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "bdev_nvme_set_hotplug", 00:20:29.919 "params": { 00:20:29.919 "period_us": 100000, 00:20:29.919 "enable": false 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "bdev_malloc_create", 00:20:29.919 "params": { 00:20:29.919 "name": "malloc0", 00:20:29.919 "num_blocks": 8192, 00:20:29.919 "block_size": 4096, 00:20:29.919 "physical_block_size": 4096, 00:20:29.919 "uuid": "248373a3-8dd1-471b-9fe3-6aa66ab6597d", 00:20:29.919 "optimal_io_boundary": 0, 00:20:29.919 "md_size": 0, 00:20:29.919 "dif_type": 0, 00:20:29.919 "dif_is_head_of_md": false, 00:20:29.919 "dif_pi_format": 0 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "bdev_wait_for_examine" 00:20:29.919 } 00:20:29.919 ] 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "subsystem": "nbd", 
00:20:29.919 "config": [] 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "subsystem": "scheduler", 00:20:29.919 "config": [ 00:20:29.919 { 00:20:29.919 "method": "framework_set_scheduler", 00:20:29.919 "params": { 00:20:29.919 "name": "static" 00:20:29.919 } 00:20:29.919 } 00:20:29.919 ] 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "subsystem": "nvmf", 00:20:29.919 "config": [ 00:20:29.919 { 00:20:29.919 "method": "nvmf_set_config", 00:20:29.919 "params": { 00:20:29.919 "discovery_filter": "match_any", 00:20:29.919 "admin_cmd_passthru": { 00:20:29.919 "identify_ctrlr": false 00:20:29.919 }, 00:20:29.919 "dhchap_digests": [ 00:20:29.919 "sha256", 00:20:29.919 "sha384", 00:20:29.919 "sha512" 00:20:29.919 ], 00:20:29.919 "dhchap_dhgroups": [ 00:20:29.919 "null", 00:20:29.919 "ffdhe2048", 00:20:29.919 "ffdhe3072", 00:20:29.919 "ffdhe4096", 00:20:29.919 "ffdhe6144", 00:20:29.919 "ffdhe8192" 00:20:29.919 ] 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "nvmf_set_max_subsystems", 00:20:29.919 "params": { 00:20:29.919 "max_subsystems": 1024 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "nvmf_set_crdt", 00:20:29.919 "params": { 00:20:29.919 "crdt1": 0, 00:20:29.919 "crdt2": 0, 00:20:29.919 "crdt3": 0 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "nvmf_create_transport", 00:20:29.919 "params": { 00:20:29.919 "trtype": "TCP", 00:20:29.919 "max_queue_depth": 128, 00:20:29.919 "max_io_qpairs_per_ctrlr": 127, 00:20:29.919 "in_capsule_data_size": 4096, 00:20:29.919 "max_io_size": 131072, 00:20:29.919 "io_unit_size": 131072, 00:20:29.919 "max_aq_depth": 128, 00:20:29.919 "num_shared_buffers": 511, 00:20:29.919 "buf_cache_size": 4294967295, 00:20:29.919 "dif_insert_or_strip": false, 00:20:29.919 "zcopy": false, 00:20:29.919 "c2h_success": false, 00:20:29.919 "sock_priority": 0, 00:20:29.919 "abort_timeout_sec": 1, 00:20:29.919 "ack_timeout": 0, 00:20:29.919 "data_wr_pool_size": 0 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "nvmf_create_subsystem", 00:20:29.919 "params": { 00:20:29.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.919 "allow_any_host": false, 00:20:29.919 "serial_number": "00000000000000000000", 00:20:29.919 "model_number": "SPDK bdev Controller", 00:20:29.919 "max_namespaces": 32, 00:20:29.919 "min_cntlid": 1, 00:20:29.919 "max_cntlid": 65519, 00:20:29.919 "ana_reporting": false 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "nvmf_subsystem_add_host", 00:20:29.919 "params": { 00:20:29.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.919 "host": "nqn.2016-06.io.spdk:host1", 00:20:29.919 "psk": "key0" 00:20:29.919 } 00:20:29.919 }, 00:20:29.919 { 00:20:29.919 "method": "nvmf_subsystem_add_ns", 00:20:29.919 "params": { 00:20:29.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.920 "namespace": { 00:20:29.920 "nsid": 1, 00:20:29.920 "bdev_name": "malloc0", 00:20:29.920 "nguid": "248373A38DD1471B9FE36AA66AB6597D", 00:20:29.920 "uuid": "248373a3-8dd1-471b-9fe3-6aa66ab6597d", 00:20:29.920 "no_auto_visible": false 00:20:29.920 } 00:20:29.920 } 00:20:29.920 }, 00:20:29.920 { 00:20:29.920 "method": "nvmf_subsystem_add_listener", 00:20:29.920 "params": { 00:20:29.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.920 "listen_address": { 00:20:29.920 "trtype": "TCP", 00:20:29.920 "adrfam": "IPv4", 00:20:29.920 "traddr": "10.0.0.2", 00:20:29.920 "trsvcid": "4420" 00:20:29.920 }, 00:20:29.920 "secure_channel": false, 00:20:29.920 "sock_impl": "ssl" 00:20:29.920 } 00:20:29.920 } 00:20:29.920 ] 
00:20:29.920 } 00:20:29.920 ] 00:20:29.920 }' 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1045430 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1045430 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1045430 ']' 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.920 11:54:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.920 [2024-10-11 11:54:14.394899] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:29.920 [2024-10-11 11:54:14.394956] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.920 [2024-10-11 11:54:14.479157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.920 [2024-10-11 11:54:14.509241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.920 [2024-10-11 11:54:14.509269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.920 [2024-10-11 11:54:14.509274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.920 [2024-10-11 11:54:14.509279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.920 [2024-10-11 11:54:14.509284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
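The two save_config dumps above (tgtcfg from the target's RPC socket, bperfcfg from bdevperf's) are then fed back verbatim: nvmf_tgt is restarted with -c /dev/fd/62 here, and bdevperf is launched below with -c /dev/fd/63. The /dev/fd paths are consistent with bash process substitution; a sketch of that pattern, with the netns prefix as traced, would be:

    tgtcfg=$(rpc_cmd save_config)                        # captured via the target's RPC socket
    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)

    # restart the target from the captured JSON; <(...) shows up as /dev/fd/62 in the trace
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")

Because the JSON carries keyring_file_add_key for /tmp/tmp.xGKfsFPR5w plus a listener with "sock_impl": "ssl" and "secure_channel": false, the restarted target comes back with the same TLS/PSK state without replaying the individual RPCs.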
00:20:29.920 [2024-10-11 11:54:14.509741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.180 [2024-10-11 11:54:14.702454] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.180 [2024-10-11 11:54:14.734478] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.180 [2024-10-11 11:54:14.734683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1045463 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1045463 /var/tmp/bdevperf.sock 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1045463 ']' 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
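waitforlisten, whose "Waiting for process..." message and max_retries=100 appear throughout this trace, gates each step on the daemon's RPC socket being ready. Its real body lives in autotest_common.sh and is not reproduced in the trace; a minimal stand-in with the same contract (succeed once the socket answers, fail if the process dies or retries run out) might look like:

    waitforlisten_sketch() {    # sketch only -- not the actual autotest_common.sh helper
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # daemon died while we waited
            rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
                >/dev/null 2>&1 && return 0               # socket is up and answering
            sleep 0.5
        done
        return 1                                          # gave up after max_retries
    }

The (( i == 0 )) lines traced after each wait are consistent with such a loop reporting that the socket answered on the first attempt.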
00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.751 11:54:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:30.751 "subsystems": [ 00:20:30.751 { 00:20:30.751 "subsystem": "keyring", 00:20:30.751 "config": [ 00:20:30.751 { 00:20:30.751 "method": "keyring_file_add_key", 00:20:30.751 "params": { 00:20:30.751 "name": "key0", 00:20:30.751 "path": "/tmp/tmp.xGKfsFPR5w" 00:20:30.751 } 00:20:30.751 } 00:20:30.751 ] 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "subsystem": "iobuf", 00:20:30.751 "config": [ 00:20:30.751 { 00:20:30.751 "method": "iobuf_set_options", 00:20:30.751 "params": { 00:20:30.751 "small_pool_count": 8192, 00:20:30.751 "large_pool_count": 1024, 00:20:30.751 "small_bufsize": 8192, 00:20:30.751 "large_bufsize": 135168 00:20:30.751 } 00:20:30.751 } 00:20:30.751 ] 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "subsystem": "sock", 00:20:30.751 "config": [ 00:20:30.751 { 00:20:30.751 "method": "sock_set_default_impl", 00:20:30.751 "params": { 00:20:30.751 "impl_name": "posix" 00:20:30.751 } 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "method": "sock_impl_set_options", 00:20:30.751 "params": { 00:20:30.751 "impl_name": "ssl", 00:20:30.751 "recv_buf_size": 4096, 00:20:30.751 "send_buf_size": 4096, 00:20:30.751 "enable_recv_pipe": true, 00:20:30.751 "enable_quickack": false, 00:20:30.751 "enable_placement_id": 0, 00:20:30.751 "enable_zerocopy_send_server": true, 00:20:30.751 "enable_zerocopy_send_client": false, 00:20:30.751 "zerocopy_threshold": 0, 00:20:30.751 "tls_version": 0, 00:20:30.751 "enable_ktls": false 00:20:30.751 } 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "method": "sock_impl_set_options", 00:20:30.751 "params": { 00:20:30.751 "impl_name": "posix", 00:20:30.751 "recv_buf_size": 2097152, 00:20:30.751 "send_buf_size": 2097152, 00:20:30.751 "enable_recv_pipe": true, 00:20:30.751 "enable_quickack": false, 00:20:30.751 "enable_placement_id": 0, 00:20:30.751 "enable_zerocopy_send_server": true, 00:20:30.751 "enable_zerocopy_send_client": false, 00:20:30.751 "zerocopy_threshold": 0, 00:20:30.751 "tls_version": 0, 00:20:30.751 "enable_ktls": false 00:20:30.751 } 00:20:30.751 } 00:20:30.751 ] 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "subsystem": "vmd", 00:20:30.751 "config": [] 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "subsystem": "accel", 00:20:30.751 "config": [ 00:20:30.751 { 00:20:30.751 "method": "accel_set_options", 00:20:30.751 "params": { 00:20:30.751 "small_cache_size": 128, 00:20:30.751 "large_cache_size": 16, 00:20:30.751 "task_count": 2048, 00:20:30.751 "sequence_count": 2048, 00:20:30.751 "buf_count": 2048 00:20:30.751 } 00:20:30.751 } 00:20:30.751 ] 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "subsystem": "bdev", 00:20:30.751 "config": [ 00:20:30.751 { 00:20:30.751 "method": "bdev_set_options", 00:20:30.751 "params": { 00:20:30.751 "bdev_io_pool_size": 65535, 00:20:30.751 "bdev_io_cache_size": 256, 00:20:30.751 "bdev_auto_examine": true, 00:20:30.751 "iobuf_small_cache_size": 128, 00:20:30.751 "iobuf_large_cache_size": 16 00:20:30.751 } 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "method": "bdev_raid_set_options", 00:20:30.751 
"params": { 00:20:30.751 "process_window_size_kb": 1024, 00:20:30.751 "process_max_bandwidth_mb_sec": 0 00:20:30.751 } 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "method": "bdev_iscsi_set_options", 00:20:30.751 "params": { 00:20:30.751 "timeout_sec": 30 00:20:30.751 } 00:20:30.751 }, 00:20:30.751 { 00:20:30.751 "method": "bdev_nvme_set_options", 00:20:30.751 "params": { 00:20:30.751 "action_on_timeout": "none", 00:20:30.751 "timeout_us": 0, 00:20:30.751 "timeout_admin_us": 0, 00:20:30.751 "keep_alive_timeout_ms": 10000, 00:20:30.751 "arbitration_burst": 0, 00:20:30.751 "low_priority_weight": 0, 00:20:30.751 "medium_priority_weight": 0, 00:20:30.751 "high_priority_weight": 0, 00:20:30.751 "nvme_adminq_poll_period_us": 10000, 00:20:30.751 "nvme_ioq_poll_period_us": 0, 00:20:30.751 "io_queue_requests": 512, 00:20:30.751 "delay_cmd_submit": true, 00:20:30.751 "transport_retry_count": 4, 00:20:30.751 "bdev_retry_count": 3, 00:20:30.752 "transport_ack_timeout": 0, 00:20:30.752 "ctrlr_loss_timeout_sec": 0, 00:20:30.752 "reconnect_delay_sec": 0, 00:20:30.752 "fast_io_fail_timeout_sec": 0, 00:20:30.752 "disable_auto_failback": false, 00:20:30.752 "generate_uuids": false, 00:20:30.752 "transport_tos": 0, 00:20:30.752 "nvme_error_stat": false, 00:20:30.752 "rdma_srq_size": 0, 00:20:30.752 "io_path_stat": false, 00:20:30.752 "allow_accel_sequence": false, 00:20:30.752 "rdma_max_cq_size": 0, 00:20:30.752 "rdma_cm_event_timeout_ms": 0, 00:20:30.752 "dhchap_digests": [ 00:20:30.752 "sha256", 00:20:30.752 "sha384", 00:20:30.752 "sha512" 00:20:30.752 ], 00:20:30.752 "dhchap_dhgroups": [ 00:20:30.752 "null", 00:20:30.752 "ffdhe2048", 00:20:30.752 "ffdhe3072", 00:20:30.752 "ffdhe4096", 00:20:30.752 "ffdhe6144", 00:20:30.752 "ffdhe8192" 00:20:30.752 ] 00:20:30.752 } 00:20:30.752 }, 00:20:30.752 { 00:20:30.752 "method": "bdev_nvme_attach_controller", 00:20:30.752 "params": { 00:20:30.752 "name": "nvme0", 00:20:30.752 "trtype": "TCP", 00:20:30.752 "adrfam": "IPv4", 00:20:30.752 "traddr": "10.0.0.2", 00:20:30.752 "trsvcid": "4420", 00:20:30.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.752 "prchk_reftag": false, 00:20:30.752 "prchk_guard": false, 00:20:30.752 "ctrlr_loss_timeout_sec": 0, 00:20:30.752 "reconnect_delay_sec": 0, 00:20:30.752 "fast_io_fail_timeout_sec": 0, 00:20:30.752 "psk": "key0", 00:20:30.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.752 "hdgst": false, 00:20:30.752 "ddgst": false, 00:20:30.752 "multipath": "multipath" 00:20:30.752 } 00:20:30.752 }, 00:20:30.752 { 00:20:30.752 "method": "bdev_nvme_set_hotplug", 00:20:30.752 "params": { 00:20:30.752 "period_us": 100000, 00:20:30.752 "enable": false 00:20:30.752 } 00:20:30.752 }, 00:20:30.752 { 00:20:30.752 "method": "bdev_enable_histogram", 00:20:30.752 "params": { 00:20:30.752 "name": "nvme0n1", 00:20:30.752 "enable": true 00:20:30.752 } 00:20:30.752 }, 00:20:30.752 { 00:20:30.752 "method": "bdev_wait_for_examine" 00:20:30.752 } 00:20:30.752 ] 00:20:30.752 }, 00:20:30.752 { 00:20:30.752 "subsystem": "nbd", 00:20:30.752 "config": [] 00:20:30.752 } 00:20:30.752 ] 00:20:30.752 }' 00:20:30.752 [2024-10-11 11:54:15.281588] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:20:30.752 [2024-10-11 11:54:15.281657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045463 ] 00:20:30.752 [2024-10-11 11:54:15.357506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.013 [2024-10-11 11:54:15.387383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.013 [2024-10-11 11:54:15.522086] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.584 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.584 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:31.584 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:31.584 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:31.844 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.844 11:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.844 Running I/O for 1 seconds... 00:20:32.785 5421.00 IOPS, 21.18 MiB/s 00:20:32.785 Latency(us) 00:20:32.785 [2024-10-11T09:54:17.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.785 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:32.785 Verification LBA range: start 0x0 length 0x2000 00:20:32.785 nvme0n1 : 1.05 5294.08 20.68 0.00 0.00 23684.78 5242.88 46530.56 00:20:32.785 [2024-10-11T09:54:17.417Z] =================================================================================================================== 00:20:32.785 [2024-10-11T09:54:17.417Z] Total : 5294.08 20.68 0.00 0.00 23684.78 5242.88 46530.56 00:20:32.785 { 00:20:32.785 "results": [ 00:20:32.785 { 00:20:32.785 "job": "nvme0n1", 00:20:32.785 "core_mask": "0x2", 00:20:32.785 "workload": "verify", 00:20:32.785 "status": "finished", 00:20:32.785 "verify_range": { 00:20:32.785 "start": 0, 00:20:32.785 "length": 8192 00:20:32.785 }, 00:20:32.785 "queue_depth": 128, 00:20:32.785 "io_size": 4096, 00:20:32.785 "runtime": 1.04834, 00:20:32.785 "iops": 5294.083980388042, 00:20:32.785 "mibps": 20.680015548390788, 00:20:32.785 "io_failed": 0, 00:20:32.785 "io_timeout": 0, 00:20:32.785 "avg_latency_us": 23684.77682162162, 00:20:32.785 "min_latency_us": 5242.88, 00:20:32.785 "max_latency_us": 46530.56 00:20:32.785 } 00:20:32.785 ], 00:20:32.785 "core_count": 1 00:20:32.785 } 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 
00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:33.046 nvmf_trace.0 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1045463 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1045463 ']' 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1045463 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1045463 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1045463' 00:20:33.046 killing process with pid 1045463 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1045463 00:20:33.046 Received shutdown signal, test time was about 1.000000 seconds 00:20:33.046 00:20:33.046 Latency(us) 00:20:33.046 [2024-10-11T09:54:17.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.046 [2024-10-11T09:54:17.678Z] =================================================================================================================== 00:20:33.046 [2024-10-11T09:54:17.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.046 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1045463 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.306 rmmod nvme_tcp 00:20:33.306 rmmod nvme_fabrics 00:20:33.306 rmmod nvme_keyring 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.306 11:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1045430 ']' 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1045430 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1045430 ']' 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1045430 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1045430 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1045430' 00:20:33.306 killing process with pid 1045430 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1045430 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1045430 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:33.306 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.567 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.567 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.567 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.567 11:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Q2VgrEBoxQ /tmp/tmp.W2DoCNNWN5 /tmp/tmp.xGKfsFPR5w 00:20:35.479 00:20:35.479 real 1m26.048s 00:20:35.479 user 2m15.596s 00:20:35.479 sys 0m26.265s 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.479 ************************************ 00:20:35.479 END TEST nvmf_tls 
00:20:35.479 ************************************ 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:35.479 ************************************ 00:20:35.479 START TEST nvmf_fips 00:20:35.479 ************************************ 00:20:35.479 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:35.740 * Looking for test storage... 00:20:35.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:35.740 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:35.740 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:35.740 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:35.740 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:35.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.741 --rc genhtml_branch_coverage=1 00:20:35.741 --rc genhtml_function_coverage=1 00:20:35.741 --rc genhtml_legend=1 00:20:35.741 --rc geninfo_all_blocks=1 00:20:35.741 --rc geninfo_unexecuted_blocks=1 00:20:35.741 00:20:35.741 ' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:35.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.741 --rc genhtml_branch_coverage=1 00:20:35.741 --rc genhtml_function_coverage=1 00:20:35.741 --rc genhtml_legend=1 00:20:35.741 --rc geninfo_all_blocks=1 00:20:35.741 --rc geninfo_unexecuted_blocks=1 00:20:35.741 00:20:35.741 ' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:35.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.741 --rc genhtml_branch_coverage=1 00:20:35.741 --rc genhtml_function_coverage=1 00:20:35.741 --rc genhtml_legend=1 00:20:35.741 --rc geninfo_all_blocks=1 00:20:35.741 --rc geninfo_unexecuted_blocks=1 00:20:35.741 00:20:35.741 ' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:35.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.741 --rc genhtml_branch_coverage=1 00:20:35.741 --rc genhtml_function_coverage=1 00:20:35.741 --rc genhtml_legend=1 00:20:35.741 --rc geninfo_all_blocks=1 00:20:35.741 --rc geninfo_unexecuted_blocks=1 00:20:35.741 00:20:35.741 ' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:35.741 11:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:35.741 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:35.742 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:36.003 Error setting digest 00:20:36.003 4032539F047F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:36.003 4032539F047F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:36.003 
11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.003 11:54:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.145 11:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:44.145 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:44.145 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:44.145 11:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:44.145 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:44.145 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:44.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.146 11:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:20:44.146 00:20:44.146 --- 10.0.0.2 ping statistics --- 00:20:44.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.146 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:20:44.146 00:20:44.146 --- 10.0.0.1 ping statistics --- 00:20:44.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.146 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1050223 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1050223 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1050223 ']' 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.146 11:54:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.146 [2024-10-11 11:54:28.060618] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
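Both directions ping cleanly, so the fixture is up: cvl_0_0 sits inside the cvl_0_0_ns_spdk namespace with 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, and nvme-tcp is loaded. A rough sketch of the same pattern — starting the target inside the namespace, then blocking until it is ready — with the paths taken from the trace above; framework_wait_init is a standard SPDK RPC used here as an assumption in place of the harness's own waitforlisten helper:

# launch the target inside the prepared namespace: shm id 0, all trace groups, core mask 0x2
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# block until subsystem initialization finishes and the default RPC socket answers
scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init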
00:20:44.146 [2024-10-11 11:54:28.060700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.146 [2024-10-11 11:54:28.148286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.146 [2024-10-11 11:54:28.198537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.146 [2024-10-11 11:54:28.198592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.146 [2024-10-11 11:54:28.198600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.146 [2024-10-11 11:54:28.198607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.146 [2024-10-11 11:54:28.198613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.146 [2024-10-11 11:54:28.199355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.7QP 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.7QP 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.7QP 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.7QP 00:20:44.407 11:54:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:44.668 [2024-10-11 11:54:29.077395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.668 [2024-10-11 11:54:29.093383] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.668 [2024-10-11 11:54:29.093634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.668 malloc0 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.668 11:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1050528 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1050528 /var/tmp/bdevperf.sock 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1050528 ']' 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.668 11:54:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:44.668 [2024-10-11 11:54:29.239389] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:20:44.668 [2024-10-11 11:54:29.239462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050528 ] 00:20:44.929 [2024-10-11 11:54:29.321361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.929 [2024-10-11 11:54:29.372131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.501 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.501 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:45.501 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.7QP 00:20:45.761 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:46.020 [2024-10-11 11:54:30.417345] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.020 TLSTESTn1 00:20:46.020 11:54:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:46.020 Running I/O for 10 seconds... 
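Condensed, the client-side TLS path exercised above is: write the interchange-format PSK to a mode-0600 file, register it with the bdevperf keyring, attach a TCP controller that names that key, then drive I/O. A sketch of the same sequence as one script — the key string and every RPC flag are copied from the trace; the file path stands in for the mktemp result:

KEY=/tmp/spdk-psk.XXX   # placeholder; the test used mktemp -t spdk-psk.XXX
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
chmod 0600 "$KEY"
# register the key with the bdevperf instance, then attach with TLS
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# kick off the queued bdevperf workload against the new bdev
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests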
00:20:48.345 5008.00 IOPS, 19.56 MiB/s [2024-10-11T09:54:33.919Z] 4942.50 IOPS, 19.31 MiB/s [2024-10-11T09:54:34.859Z] 5494.33 IOPS, 21.46 MiB/s [2024-10-11T09:54:35.802Z] 5674.25 IOPS, 22.17 MiB/s [2024-10-11T09:54:36.746Z] 5630.60 IOPS, 21.99 MiB/s [2024-10-11T09:54:37.688Z] 5500.17 IOPS, 21.49 MiB/s [2024-10-11T09:54:38.630Z] 5582.71 IOPS, 21.81 MiB/s [2024-10-11T09:54:40.015Z] 5680.25 IOPS, 22.19 MiB/s [2024-10-11T09:54:40.960Z] 5611.89 IOPS, 21.92 MiB/s [2024-10-11T09:54:40.960Z] 5464.50 IOPS, 21.35 MiB/s 00:20:56.328 Latency(us) 00:20:56.328 [2024-10-11T09:54:40.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.328 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:56.328 Verification LBA range: start 0x0 length 0x2000 00:20:56.328 TLSTESTn1 : 10.01 5471.01 21.37 0.00 0.00 23363.41 4314.45 41943.04 00:20:56.328 [2024-10-11T09:54:40.960Z] =================================================================================================================== 00:20:56.328 [2024-10-11T09:54:40.960Z] Total : 5471.01 21.37 0.00 0.00 23363.41 4314.45 41943.04 00:20:56.328 { 00:20:56.328 "results": [ 00:20:56.328 { 00:20:56.328 "job": "TLSTESTn1", 00:20:56.328 "core_mask": "0x4", 00:20:56.328 "workload": "verify", 00:20:56.328 "status": "finished", 00:20:56.328 "verify_range": { 00:20:56.328 "start": 0, 00:20:56.328 "length": 8192 00:20:56.328 }, 00:20:56.328 "queue_depth": 128, 00:20:56.328 "io_size": 4096, 00:20:56.328 "runtime": 10.011319, 00:20:56.328 "iops": 5471.0073667615625, 00:20:56.328 "mibps": 21.371122526412353, 00:20:56.328 "io_failed": 0, 00:20:56.328 "io_timeout": 0, 00:20:56.328 "avg_latency_us": 23363.41189999757, 00:20:56.328 "min_latency_us": 4314.453333333333, 00:20:56.328 "max_latency_us": 41943.04 00:20:56.328 } 00:20:56.328 ], 00:20:56.328 "core_count": 1 00:20:56.328 } 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:56.328 nvmf_trace.0 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1050528 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1050528 ']' 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1050528 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050528 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050528' 00:20:56.328 killing process with pid 1050528 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1050528 00:20:56.328 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.328 00:20:56.328 Latency(us) 00:20:56.328 [2024-10-11T09:54:40.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.328 [2024-10-11T09:54:40.960Z] =================================================================================================================== 00:20:56.328 [2024-10-11T09:54:40.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1050528 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.328 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.328 rmmod nvme_tcp 00:20:56.589 rmmod nvme_fabrics 00:20:56.589 rmmod nvme_keyring 00:20:56.589 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.589 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:56.589 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:56.589 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1050223 ']' 00:20:56.589 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1050223 00:20:56.589 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1050223 ']' 00:20:56.589 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1050223 00:20:56.589 11:54:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1050223 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:56.589 11:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1050223' 00:20:56.589 killing process with pid 1050223 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1050223 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1050223 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.589 11:54:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.7QP 00:20:59.194 00:20:59.194 real 0m23.158s 00:20:59.194 user 0m24.963s 00:20:59.194 sys 0m9.582s 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:59.194 ************************************ 00:20:59.194 END TEST nvmf_fips 00:20:59.194 ************************************ 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:59.194 ************************************ 00:20:59.194 START TEST nvmf_control_msg_list 00:20:59.194 ************************************ 00:20:59.194 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:59.194 * Looking for test storage... 
00:20:59.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:59.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.195 --rc genhtml_branch_coverage=1 00:20:59.195 --rc genhtml_function_coverage=1 00:20:59.195 --rc genhtml_legend=1 00:20:59.195 --rc geninfo_all_blocks=1 00:20:59.195 --rc geninfo_unexecuted_blocks=1 00:20:59.195 00:20:59.195 ' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:59.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.195 --rc genhtml_branch_coverage=1 00:20:59.195 --rc genhtml_function_coverage=1 00:20:59.195 --rc genhtml_legend=1 00:20:59.195 --rc geninfo_all_blocks=1 00:20:59.195 --rc geninfo_unexecuted_blocks=1 00:20:59.195 00:20:59.195 ' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:59.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.195 --rc genhtml_branch_coverage=1 00:20:59.195 --rc genhtml_function_coverage=1 00:20:59.195 --rc genhtml_legend=1 00:20:59.195 --rc geninfo_all_blocks=1 00:20:59.195 --rc geninfo_unexecuted_blocks=1 00:20:59.195 00:20:59.195 ' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:59.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.195 --rc genhtml_branch_coverage=1 00:20:59.195 --rc genhtml_function_coverage=1 00:20:59.195 --rc genhtml_legend=1 00:20:59.195 --rc geninfo_all_blocks=1 00:20:59.195 --rc geninfo_unexecuted_blocks=1 00:20:59.195 00:20:59.195 ' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.195 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:59.196 11:54:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:07.440 11:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:07.440 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.440 11:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:07.440 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:07.440 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:07.440 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:07.440 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.441 11:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:07.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:21:07.441 00:21:07.441 --- 10.0.0.2 ping statistics --- 00:21:07.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.441 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:21:07.441 00:21:07.441 --- 10.0.0.1 ping statistics --- 00:21:07.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.441 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1056968 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1056968 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1056968 ']' 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.441 11:54:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.441 [2024-10-11 11:54:51.052263] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:21:07.441 [2024-10-11 11:54:51.052329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.441 [2024-10-11 11:54:51.141128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.441 [2024-10-11 11:54:51.191477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.441 [2024-10-11 11:54:51.191536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.441 [2024-10-11 11:54:51.191546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.441 [2024-10-11 11:54:51.191554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.441 [2024-10-11 11:54:51.191560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
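Condensed, the network bring-up traced above isolates the target port (cvl_0_0, 10.0.0.2) in its own namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic crosses the two physical E810 ports rather than a loopback device. A hedged replay of just the commands visible in the trace (interface names and addresses are the log's own; paths shortened, address flushes and error handling omitted):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port toward the initiator, tagged for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> initiator
    # the target application then runs inside the namespace, as traced above:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &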
00:21:07.441 [2024-10-11 11:54:51.192339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.441 [2024-10-11 11:54:51.918312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.441 Malloc0 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.441 11:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:07.441 [2024-10-11 11:54:51.972724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1057230 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1057231 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1057232 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1057230 00:21:07.441 11:54:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:07.441 [2024-10-11 11:54:52.063619] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:07.441 [2024-10-11 11:54:52.064023] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:07.442 [2024-10-11 11:54:52.064354] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:08.827 Initializing NVMe Controllers 00:21:08.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:08.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:08.827 Initialization complete. Launching workers. 
00:21:08.827 ======================================================== 00:21:08.827 Latency(us) 00:21:08.827 Device Information : IOPS MiB/s Average min max 00:21:08.827 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 137.00 0.54 7438.46 469.68 41102.64 00:21:08.827 ======================================================== 00:21:08.827 Total : 137.00 0.54 7438.46 469.68 41102.64 00:21:08.827 00:21:08.827 Initializing NVMe Controllers 00:21:08.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:08.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:08.827 Initialization complete. Launching workers. 00:21:08.827 ======================================================== 00:21:08.827 Latency(us) 00:21:08.827 Device Information : IOPS MiB/s Average min max 00:21:08.827 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 141.00 0.55 7093.34 214.19 41129.27 00:21:08.827 ======================================================== 00:21:08.827 Total : 141.00 0.55 7093.34 214.19 41129.27 00:21:08.827 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1057231 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1057232 00:21:08.827 Initializing NVMe Controllers 00:21:08.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:08.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:08.827 Initialization complete. Launching workers. 00:21:08.827 ======================================================== 00:21:08.827 Latency(us) 00:21:08.827 Device Information : IOPS MiB/s Average min max 00:21:08.827 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1493.98 5.84 669.26 313.14 783.33 00:21:08.827 ======================================================== 00:21:08.827 Total : 1493.98 5.84 669.26 313.14 783.33 00:21:08.827 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.827 rmmod nvme_tcp 00:21:08.827 rmmod nvme_fabrics 00:21:08.827 rmmod nvme_keyring 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' 
-n 1056968 ']' 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1056968 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1056968 ']' 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1056968 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1056968 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:08.827 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:08.828 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1056968' 00:21:08.828 killing process with pid 1056968 00:21:08.828 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1056968 00:21:08.828 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1056968 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.088 11:54:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.000 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:11.000 00:21:11.000 real 0m12.282s 00:21:11.000 user 0m7.852s 00:21:11.000 sys 0m6.430s 00:21:11.000 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:11.000 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:11.000 ************************************ 00:21:11.000 END TEST nvmf_control_msg_list 00:21:11.000 ************************************ 
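The pass that just closed stressed the transport's control-message accounting: the target was created with --control-msg-num 1 and 768-byte in-capsule data, then three queue-depth-1 randread perf jobs on lcores 1-3 contended for it. In the tables above, two of the jobs averaged roughly 7 ms per 4 KiB read (~140 IOPS) while the third saw ~0.67 ms (~1.5k IOPS), which reads like the expected contention pattern when the control-message pool is this small. Condensed to its RPC sequence, a hedged replay of the rpc_cmd calls in the trace (rpc_cmd is assumed to forward to scripts/rpc.py, and paths are shortened):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1    # pool of exactly one control msg
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512  # 32 MiB RAM disk, 512 B blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    for mask in 0x2 0x4 0x8; do    # the three competing initiators from the trace
        ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait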
00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.261 ************************************ 00:21:11.261 START TEST nvmf_wait_for_buf 00:21:11.261 ************************************ 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:11.261 * Looking for test storage... 00:21:11.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.261 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:11.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.524 --rc genhtml_branch_coverage=1 00:21:11.524 --rc genhtml_function_coverage=1 00:21:11.524 --rc genhtml_legend=1 00:21:11.524 --rc geninfo_all_blocks=1 00:21:11.524 --rc geninfo_unexecuted_blocks=1 00:21:11.524 00:21:11.524 ' 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:11.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.524 --rc genhtml_branch_coverage=1 00:21:11.524 --rc genhtml_function_coverage=1 00:21:11.524 --rc genhtml_legend=1 00:21:11.524 --rc geninfo_all_blocks=1 00:21:11.524 --rc geninfo_unexecuted_blocks=1 00:21:11.524 00:21:11.524 ' 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:11.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.524 --rc genhtml_branch_coverage=1 00:21:11.524 --rc genhtml_function_coverage=1 00:21:11.524 --rc genhtml_legend=1 00:21:11.524 --rc geninfo_all_blocks=1 00:21:11.524 --rc geninfo_unexecuted_blocks=1 00:21:11.524 00:21:11.524 ' 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:11.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.524 --rc genhtml_branch_coverage=1 00:21:11.524 --rc genhtml_function_coverage=1 00:21:11.524 --rc genhtml_legend=1 00:21:11.524 --rc geninfo_all_blocks=1 00:21:11.524 --rc geninfo_unexecuted_blocks=1 00:21:11.524 00:21:11.524 ' 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.524 11:54:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.524 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.525 11:54:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.664 
11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.664 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:19.665 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:19.665 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:19.665 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:19.665 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.665 11:55:03 
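The device-discovery loop that just ran maps each supported PCI function to its kernel net interface purely through sysfs: glob the `net/` directory under the PCI device, drop interfaces that are not up, then strip the path down to the interface name. A minimal standalone sketch of the same steps (the operstate read is an assumption about what produced the traced `[[ up == up ]]` test; the glob and `##*/` strip are verbatim from the trace):

pci=0000:4b:00.0                                   # one of the addresses found above
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob resolves to .../net/cvl_0_0
for net_dev in "${!pci_net_devs[@]}"; do
	# keep only interfaces whose operstate is "up"
	[[ $(< "${pci_net_devs[net_dev]}/operstate") == up ]] || unset -v "pci_net_devs[net_dev]"
done
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
echo "Found net devices under $pci: ${pci_net_devs[*]}"

Run against the hardware in this log, that yields the two "Found net devices under 0000:4b:00.x" lines above.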
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:21:19.665 00:21:19.665 --- 10.0.0.2 ping statistics --- 00:21:19.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.665 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:19.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:21:19.665 00:21:19.665 --- 10.0.0.1 ping statistics --- 00:21:19.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.665 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1061588 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1061588 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1061588 ']' 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.665 11:55:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.666 [2024-10-11 11:55:03.526512] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
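The `nvmf_tcp_init` sequence traced above builds the point-to-point test rig out of the two ice ports: one port stays in the root namespace as the initiator (10.0.0.1), the other is moved into the `cvl_0_0_ns_spdk` namespace as the target (10.0.0.2), TCP port 4420 is opened with a comment-tagged iptables rule, and both directions are ping-verified. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk                       # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# tag the ACCEPT rule so teardown can strip it by grepping for the comment
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
	-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The two ping blocks in the log are the output of those final verification steps.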
00:21:19.666 [2024-10-11 11:55:03.526576] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.666 [2024-10-11 11:55:03.614666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.666 [2024-10-11 11:55:03.666903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.666 [2024-10-11 11:55:03.666955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.666 [2024-10-11 11:55:03.666963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.666 [2024-10-11 11:55:03.666970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.666 [2024-10-11 11:55:03.666976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.666 [2024-10-11 11:55:03.667736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.928 11:55:04 
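After launching `nvmf_tgt` inside the target namespace, `nvmfappstart` blocks in `waitforlisten` until the app's RPC socket accepts commands, using the `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100` values visible in the trace. A hedged sketch of that polling pattern (the real helper in test/common/autotest_common.sh also probes the socket with an RPC call; this simplified version only checks for liveness and socket creation):

waitforlisten() {
	local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
	echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
	for ((i = 0; i < max_retries; i++)); do
		kill -0 "$pid" 2> /dev/null || return 1 # app died during startup
		[[ -S $rpc_addr ]] && return 0          # RPC socket exists, app is ready
		sleep 0.5
	done
	return 1
}

Only once this returns do the `rpc_cmd` calls that follow (accel_set_options, iobuf_set_options, framework_start_init) go through.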
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 Malloc0 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 [2024-10-11 11:55:04.503844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:19.928 [2024-10-11 11:55:04.540159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.928 11:55:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:20.188 [2024-10-11 11:55:04.635769] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:21.573 Initializing NVMe Controllers 00:21:21.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:21.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:21.573 Initialization complete. Launching workers. 00:21:21.573 ======================================================== 00:21:21.573 Latency(us) 00:21:21.573 Device Information : IOPS MiB/s Average min max 00:21:21.573 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 26.00 3.25 160838.33 47870.24 191553.84 00:21:21.573 ======================================================== 00:21:21.573 Total : 26.00 3.25 160838.33 47870.24 191553.84 00:21:21.573 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=390 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 390 -eq 0 ]] 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.573 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.573 rmmod nvme_tcp 00:21:21.573 rmmod nvme_fabrics 00:21:21.833 rmmod nvme_keyring 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1061588 ']' 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1061588 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1061588 ']' 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1061588 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
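The pass/fail logic of this test sits in the two `wait_for_buf.sh@32` lines above: the target was configured with a deliberately tiny buffer budget (`iobuf_set_options --small-pool-count 154 --small_bufsize=8192`, transport created with `-n 24 -b 24`), so the 131072-byte random reads must drain the small pool, and `iobuf_get_stats` has to report a non-zero `small_pool.retry` for the nvmf_TCP module (390 here) to prove the target waited for buffers instead of erroring out. The same query issued by hand against a running target might look like this (assumes the SPDK repo root as working directory and the default socket path the trace uses):

# read the buffer-pool retry counter from a running SPDK app, as the test does
retry_count=$(scripts/rpc.py -s /var/tmp/spdk.sock iobuf_get_stats \
	| jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ $retry_count -eq 0 ]]; then
	echo "FAIL: the small pool never ran dry, wait-for-buf path was not exercised"
else
	echo "OK: $retry_count small-buffer allocations had to retry"
fi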
common/autotest_common.sh@955 -- # uname 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1061588 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1061588' 00:21:21.833 killing process with pid 1061588 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1061588 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1061588 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:21.833 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:22.094 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:22.094 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:22.094 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:22.094 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:22.094 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:22.094 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.094 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.094 11:55:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.003 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:24.003 00:21:24.003 real 0m12.847s 00:21:24.003 user 0m5.204s 00:21:24.003 sys 0m6.239s 00:21:24.003 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:24.003 11:55:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.003 ************************************ 00:21:24.003 END TEST nvmf_wait_for_buf 00:21:24.003 ************************************ 00:21:24.003 11:55:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:24.003 11:55:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:24.004 11:55:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:24.004 11:55:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:24.004 11:55:08 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.004 11:55:08 
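Teardown in `nvmftestfini` runs in reverse: `killprocess` first sanity-checks via `ps --no-headers -o comm=` that the pid still names `reactor_0` (not a recycled pid) before killing and waiting on it, then the firewall rules added during setup are removed. The iptables cleanup is a compact trick worth calling out, taken verbatim from the `iptr` helper in the trace: every rule the test inserted carries an SPDK_NVMF comment, so filtering those lines out of a save/restore round-trip deletes exactly the test's rules and nothing else.

# drop only the rules this test added, by their SPDK_NVMF comment tag
iptables-save | grep -v SPDK_NVMF | iptables-restore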
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:32.146 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:32.146 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:32.146 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:32.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:32.146 ************************************ 00:21:32.146 START TEST nvmf_perf_adq 00:21:32.146 ************************************ 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:32.146 * Looking for test storage... 00:21:32.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:32.146 11:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:32.146 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.147 --rc genhtml_branch_coverage=1 00:21:32.147 --rc genhtml_function_coverage=1 00:21:32.147 --rc genhtml_legend=1 00:21:32.147 --rc geninfo_all_blocks=1 00:21:32.147 --rc geninfo_unexecuted_blocks=1 00:21:32.147 00:21:32.147 ' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.147 --rc genhtml_branch_coverage=1 00:21:32.147 --rc genhtml_function_coverage=1 00:21:32.147 --rc genhtml_legend=1 00:21:32.147 --rc geninfo_all_blocks=1 00:21:32.147 --rc geninfo_unexecuted_blocks=1 00:21:32.147 00:21:32.147 ' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.147 --rc genhtml_branch_coverage=1 00:21:32.147 --rc genhtml_function_coverage=1 00:21:32.147 --rc genhtml_legend=1 00:21:32.147 --rc geninfo_all_blocks=1 00:21:32.147 --rc geninfo_unexecuted_blocks=1 00:21:32.147 00:21:32.147 ' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:32.147 --rc genhtml_branch_coverage=1 00:21:32.147 --rc genhtml_function_coverage=1 00:21:32.147 --rc genhtml_legend=1 00:21:32.147 --rc geninfo_all_blocks=1 00:21:32.147 --rc geninfo_unexecuted_blocks=1 00:21:32.147 00:21:32.147 ' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:32.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:32.147 11:55:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:32.147 11:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.735 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.736 11:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:38.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:38.736 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:38.736 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:38.736 11:55:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:38.736 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:38.736 11:55:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:40.648 11:55:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:42.035 11:55:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:47.327 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:47.327 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:47.327 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:47.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:47.327 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:47.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:21:47.328 00:21:47.328 --- 10.0.0.2 ping statistics --- 00:21:47.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.328 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
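The two pings around this point verify the plumbing the trace just built: one physical E810 port (cvl_0_0) is moved into a private network namespace so the NVMe/TCP target and the initiator can share a single host while still exercising real NIC hardware. Condensed from the commands in the trace (interface names and the 10.0.0.0/24 addresses are specific to this test bed):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1     # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                             # the target's namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP

The iptables rule is inserted with an -m comment --comment 'SPDK_NVMF:...' tag (visible above), which lets the teardown further down restore the ruleset simply by piping iptables-save through grep -v SPDK_NVMF into iptables-restore.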
00:21:47.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:21:47.328 00:21:47.328 --- 10.0.0.1 ping statistics --- 00:21:47.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.328 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.328 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:47.589 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:47.589 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.589 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:47.589 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:47.589 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:47.589 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:47.589 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:47.589 11:55:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1071812 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1071812 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1071812 ']' 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.589 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.589 [2024-10-11 11:55:32.058443] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
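Because the target was started with --wait-for-rpc, it pauses after EAL initialization and listens on /var/tmp/spdk.sock, letting the script tune the posix socket layer before framework_start_init completes startup. The rpc_cmd calls that follow in the trace correspond roughly to this sequence with SPDK's scripts/rpc.py client (a sketch of the equivalent direct invocations; the test actually goes through its rpc_cmd wrapper):

  rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB ramdisk with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

In this first pass --enable-placement-id and --sock-priority are both 0, i.e. ADQ steering is off; the second pass later in the trace repeats the same sequence with both set to 1.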
00:21:47.589 [2024-10-11 11:55:32.058507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.589 [2024-10-11 11:55:32.148487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.589 [2024-10-11 11:55:32.203100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.589 [2024-10-11 11:55:32.203153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.589 [2024-10-11 11:55:32.203162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.589 [2024-10-11 11:55:32.203170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.589 [2024-10-11 11:55:32.203176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.589 [2024-10-11 11:55:32.205518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.589 [2024-10-11 11:55:32.205700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.589 [2024-10-11 11:55:32.205812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.589 [2024-10-11 11:55:32.206001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.531 
11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.531 11:55:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.532 [2024-10-11 11:55:33.089930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.532 Malloc1 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.532 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.793 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.793 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.793 [2024-10-11 11:55:33.169777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.793 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.793 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1072163 00:21:48.793 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:48.793 11:55:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:50.710 "tick_rate": 2400000000, 00:21:50.710 "poll_groups": [ 00:21:50.710 { 00:21:50.710 "name": "nvmf_tgt_poll_group_000", 00:21:50.710 "admin_qpairs": 1, 00:21:50.710 "io_qpairs": 1, 00:21:50.710 "current_admin_qpairs": 1, 00:21:50.710 "current_io_qpairs": 1, 00:21:50.710 "pending_bdev_io": 0, 00:21:50.710 "completed_nvme_io": 16488, 00:21:50.710 "transports": [ 00:21:50.710 { 00:21:50.710 "trtype": "TCP" 00:21:50.710 } 00:21:50.710 ] 00:21:50.710 }, 00:21:50.710 { 00:21:50.710 "name": "nvmf_tgt_poll_group_001", 00:21:50.710 "admin_qpairs": 0, 00:21:50.710 "io_qpairs": 1, 00:21:50.710 "current_admin_qpairs": 0, 00:21:50.710 "current_io_qpairs": 1, 00:21:50.710 "pending_bdev_io": 0, 00:21:50.710 "completed_nvme_io": 16788, 00:21:50.710 "transports": [ 00:21:50.710 { 00:21:50.710 "trtype": "TCP" 00:21:50.710 } 00:21:50.710 ] 00:21:50.710 }, 00:21:50.710 { 00:21:50.710 "name": "nvmf_tgt_poll_group_002", 00:21:50.710 "admin_qpairs": 0, 00:21:50.710 "io_qpairs": 1, 00:21:50.710 "current_admin_qpairs": 0, 00:21:50.710 "current_io_qpairs": 1, 00:21:50.710 "pending_bdev_io": 0, 00:21:50.710 "completed_nvme_io": 18718, 00:21:50.710 "transports": [ 00:21:50.710 { 00:21:50.710 "trtype": "TCP" 00:21:50.710 } 00:21:50.710 ] 00:21:50.710 }, 00:21:50.710 { 00:21:50.710 "name": "nvmf_tgt_poll_group_003", 00:21:50.710 "admin_qpairs": 0, 00:21:50.710 "io_qpairs": 1, 00:21:50.710 "current_admin_qpairs": 0, 00:21:50.710 "current_io_qpairs": 1, 00:21:50.710 "pending_bdev_io": 0, 00:21:50.710 "completed_nvme_io": 17152, 00:21:50.710 "transports": [ 00:21:50.710 { 00:21:50.710 "trtype": "TCP" 00:21:50.710 } 00:21:50.710 ] 00:21:50.710 } 00:21:50.710 ] 00:21:50.710 }' 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:50.710 11:55:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1072163 00:21:58.847 Initializing NVMe Controllers 00:21:58.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:58.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:58.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:58.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:58.847 Initialization complete. Launching workers. 00:21:58.847 ======================================================== 00:21:58.847 Latency(us) 00:21:58.847 Device Information : IOPS MiB/s Average min max 00:21:58.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12932.60 50.52 4948.92 1470.55 11754.38 00:21:58.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13180.30 51.49 4854.98 1217.70 15448.29 00:21:58.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13789.80 53.87 4641.51 1262.39 15045.69 00:21:58.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13221.60 51.65 4839.91 1169.02 13133.92 00:21:58.847 ======================================================== 00:21:58.847 Total : 53124.30 207.52 4818.68 1169.02 15448.29 00:21:58.847 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.847 rmmod nvme_tcp 00:21:58.847 rmmod nvme_fabrics 00:21:58.847 rmmod nvme_keyring 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1071812 ']' 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1071812 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1071812 ']' 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1071812 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.847 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1071812 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1071812' 00:21:59.109 killing process with pid 1071812 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1071812 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1071812 00:21:59.109 11:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.109 11:55:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.655 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.655 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:01.655 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:01.655 11:55:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:02.597 11:55:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:04.537 11:55:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.830 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:09.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:09.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:09.831 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:09.831 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:09.831 11:55:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.831 11:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:22:09.831 00:22:09.831 --- 10.0.0.2 ping statistics --- 00:22:09.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.831 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:22:09.831 00:22:09.831 --- 10.0.0.1 ping statistics --- 00:22:09.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.831 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:09.831 net.core.busy_poll = 1 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:09.831 net.core.busy_read = 1 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:09.831 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:10.093 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:10.093 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:10.093 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1076637 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1076637 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1076637 ']' 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.094 11:55:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:10.094 [2024-10-11 11:55:54.659843] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:10.094 [2024-10-11 11:55:54.659909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.355 [2024-10-11 11:55:54.755371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.355 [2024-10-11 11:55:54.807789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
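This second pass is the ADQ-enabled run: before restarting the target, the script dedicates a hardware traffic class on the E810 port to NVMe/TCP and turns on socket busy polling. Condensed from the commands in the trace (in the test each one runs inside the cvl_0_0_ns_spdk namespace via ip netns exec):

  ethtool --offload cvl_0_0 hw-tc-offload on                 # let the NIC enforce traffic classes
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                             # poll sockets instead of waiting on interrupts
  sysctl -w net.core.busy_read=1
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

queues 2@0 2@2 gives traffic class 0 two queues starting at queue 0 and traffic class 1 two queues starting at queue 2; the flower filter with skip_sw then steers port-4420 traffic to class 1 entirely in hardware. The set_xps_rxqs helper that follows aligns transmit-queue selection with the matching receive queues, and the RPC configuration below differs from the first run only in --enable-placement-id 1 and --sock-priority 1.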
00:22:10.355 [2024-10-11 11:55:54.807844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.355 [2024-10-11 11:55:54.807853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.355 [2024-10-11 11:55:54.807860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.355 [2024-10-11 11:55:54.807866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.355 [2024-10-11 11:55:54.809828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.355 [2024-10-11 11:55:54.809987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.355 [2024-10-11 11:55:54.810148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.355 [2024-10-11 11:55:54.810148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.928 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.928 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:10.928 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:10.928 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.928 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.189 11:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 [2024-10-11 11:55:55.714547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 Malloc1 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:11.189 [2024-10-11 11:55:55.798823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.189 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1076991 00:22:11.190 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:11.190 11:55:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:13.738 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:13.738 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.738 11:55:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.738 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.738 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:13.738 "tick_rate": 2400000000, 00:22:13.738 "poll_groups": [ 00:22:13.738 { 00:22:13.738 "name": "nvmf_tgt_poll_group_000", 00:22:13.738 "admin_qpairs": 1, 00:22:13.738 "io_qpairs": 1, 00:22:13.738 "current_admin_qpairs": 1, 00:22:13.738 "current_io_qpairs": 1, 00:22:13.738 "pending_bdev_io": 0, 00:22:13.738 "completed_nvme_io": 27798, 00:22:13.738 "transports": [ 00:22:13.738 { 00:22:13.738 "trtype": "TCP" 00:22:13.738 } 00:22:13.738 ] 00:22:13.738 }, 00:22:13.738 { 00:22:13.738 "name": "nvmf_tgt_poll_group_001", 00:22:13.738 "admin_qpairs": 0, 00:22:13.738 "io_qpairs": 3, 00:22:13.738 "current_admin_qpairs": 0, 00:22:13.738 "current_io_qpairs": 3, 00:22:13.738 "pending_bdev_io": 0, 00:22:13.738 "completed_nvme_io": 32020, 00:22:13.738 "transports": [ 00:22:13.738 { 00:22:13.738 "trtype": "TCP" 00:22:13.738 } 00:22:13.738 ] 00:22:13.738 }, 00:22:13.738 { 00:22:13.738 "name": "nvmf_tgt_poll_group_002", 00:22:13.738 "admin_qpairs": 0, 00:22:13.738 "io_qpairs": 0, 00:22:13.738 "current_admin_qpairs": 0, 00:22:13.738 "current_io_qpairs": 0, 00:22:13.738 "pending_bdev_io": 0, 00:22:13.739 "completed_nvme_io": 0, 00:22:13.739 "transports": [ 00:22:13.739 { 00:22:13.739 "trtype": "TCP" 00:22:13.739 } 00:22:13.739 ] 00:22:13.739 }, 00:22:13.739 { 00:22:13.739 "name": "nvmf_tgt_poll_group_003", 00:22:13.739 "admin_qpairs": 0, 00:22:13.739 "io_qpairs": 0, 00:22:13.739 "current_admin_qpairs": 0, 00:22:13.739 "current_io_qpairs": 0, 00:22:13.739 "pending_bdev_io": 0, 00:22:13.739 "completed_nvme_io": 0, 00:22:13.739 "transports": [ 00:22:13.739 { 00:22:13.739 "trtype": "TCP" 00:22:13.739 } 00:22:13.739 ] 00:22:13.739 } 00:22:13.739 ] 00:22:13.739 }' 00:22:13.739 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:13.739 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:13.739 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:13.739 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:13.739 11:55:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1076991 00:22:21.880 Initializing NVMe Controllers 00:22:21.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:21.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:21.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:21.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:21.880 Initialization complete. Launching workers. 
00:22:21.880 ======================================================== 00:22:21.880 Latency(us) 00:22:21.880 Device Information : IOPS MiB/s Average min max 00:22:21.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6715.60 26.23 9544.11 1475.12 56777.87 00:22:21.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6872.60 26.85 9311.76 1311.36 57651.24 00:22:21.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7240.00 28.28 8857.88 1158.86 60101.23 00:22:21.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 17932.09 70.05 3568.82 970.08 45574.38 00:22:21.880 ======================================================== 00:22:21.880 Total : 38760.29 151.41 6610.32 970.08 60101.23 00:22:21.880 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.880 rmmod nvme_tcp 00:22:21.880 rmmod nvme_fabrics 00:22:21.880 rmmod nvme_keyring 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1076637 ']' 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1076637 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1076637 ']' 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1076637 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1076637 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1076637' 00:22:21.880 killing process with pid 1076637 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1076637 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1076637 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:21.880 
11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.880 11:56:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.181 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.181 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:25.181 00:22:25.181 real 0m53.592s 00:22:25.181 user 2m50.103s 00:22:25.181 sys 0m11.592s 00:22:25.181 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.182 ************************************ 00:22:25.182 END TEST nvmf_perf_adq 00:22:25.182 ************************************ 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:25.182 ************************************ 00:22:25.182 START TEST nvmf_shutdown 00:22:25.182 ************************************ 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:25.182 * Looking for test storage... 
00:22:25.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.182 --rc genhtml_branch_coverage=1 00:22:25.182 --rc genhtml_function_coverage=1 00:22:25.182 --rc genhtml_legend=1 00:22:25.182 --rc geninfo_all_blocks=1 00:22:25.182 --rc geninfo_unexecuted_blocks=1 00:22:25.182 00:22:25.182 ' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.182 --rc genhtml_branch_coverage=1 00:22:25.182 --rc genhtml_function_coverage=1 00:22:25.182 --rc genhtml_legend=1 00:22:25.182 --rc geninfo_all_blocks=1 00:22:25.182 --rc geninfo_unexecuted_blocks=1 00:22:25.182 00:22:25.182 ' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.182 --rc genhtml_branch_coverage=1 00:22:25.182 --rc genhtml_function_coverage=1 00:22:25.182 --rc genhtml_legend=1 00:22:25.182 --rc geninfo_all_blocks=1 00:22:25.182 --rc geninfo_unexecuted_blocks=1 00:22:25.182 00:22:25.182 ' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:25.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.182 --rc genhtml_branch_coverage=1 00:22:25.182 --rc genhtml_function_coverage=1 00:22:25.182 --rc genhtml_legend=1 00:22:25.182 --rc geninfo_all_blocks=1 00:22:25.182 --rc geninfo_unexecuted_blocks=1 00:22:25.182 00:22:25.182 ' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
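The cmp_versions walk traced above is the harness deciding whether the installed lcov predates 2.x. A minimal re-creation of that comparison logic, for reference; it is simplified in that it assumes numeric components, whereas the real helper in scripts/common.sh also validates each field with its decimal check:

# split both version strings on '.', '-' or ':' (the IFS=.-: seen in the
# trace above); missing components compare as 0; assumes numeric fields
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]   # all components equal: true only for ==, <=, >=
}
lt() { cmp_versions "$1" '<' "$2"; }   # the helper invoked as 'lt 1.15 2' above
lt 1.15 2 && echo "lcov 1.15 predates 2.x"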
00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.182 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:25.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:25.183 11:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:25.183 ************************************ 00:22:25.183 START TEST nvmf_shutdown_tc1 00:22:25.183 ************************************ 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.183 11:56:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:33.322 11:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:33.322 11:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:33.322 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:33.322 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:33.322 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:33.322 11:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:33.322 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.322 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:33.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:22:33.323 00:22:33.323 --- 10.0.0.2 ping statistics --- 00:22:33.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.323 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:33.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:22:33.323 00:22:33.323 --- 10.0.0.1 ping statistics --- 00:22:33.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.323 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1083460 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1083460 00:22:33.323 11:56:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1083460 ']' 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
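The nvmf_tcp_init sequence traced above (common.sh@250 through @291) wires the two ports of the NIC into a two-host topology: the target-side port moves into its own network namespace so the initiator at 10.0.0.1 and the target at 10.0.0.2 exchange traffic over the physical link rather than loopback. Condensed into its bare commands; the interface names are the ones this run discovered (cvl_0_0/cvl_0_1) and would differ on other hardware:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port; the comment tag is what lets teardown strip the
# rule again with iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator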
00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.323 [2024-10-11 11:56:17.054916] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:33.323 [2024-10-11 11:56:17.054964] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.323 [2024-10-11 11:56:17.114071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.323 [2024-10-11 11:56:17.143569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.323 [2024-10-11 11:56:17.143599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.323 [2024-10-11 11:56:17.143606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.323 [2024-10-11 11:56:17.143611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.323 [2024-10-11 11:56:17.143615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.323 [2024-10-11 11:56:17.144833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.323 [2024-10-11 11:56:17.144987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.323 [2024-10-11 11:56:17.145125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.323 [2024-10-11 11:56:17.145127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.323 [2024-10-11 11:56:17.279096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:33.323 11:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.323 Malloc1 
00:22:33.323 [2024-10-11 11:56:17.395375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.323 Malloc2 00:22:33.323 Malloc3 00:22:33.323 Malloc4 00:22:33.323 Malloc5 00:22:33.323 Malloc6 00:22:33.323 Malloc7 00:22:33.323 Malloc8 00:22:33.323 Malloc9 00:22:33.323 Malloc10 00:22:33.323 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1083519 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1083519 /var/tmp/bdevperf.sock 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1083519 ']' 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 [2024-10-11 11:56:17.835484] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
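The wall of heredocs above is gen_nvmf_target_json at work (nvmf/common.sh@558-580): one JSON object describing a bdev_nvme_attach_controller call is rendered per requested subsystem and pushed into the config array, and at @582-584 the array is comma-joined (IFS=, over "${config[*]}") and piped through jq ., so a malformed template fails the test immediately instead of confusing the consumer. A condensed sketch of that mechanism — the outer subsystems/bdev wrapper is an assumption, since the trace only shows the per-controller objects and the join:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # render one attach-controller entry per subsystem id
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # comma-join the rendered objects and let jq validate/pretty-print
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
  "config": [ $(IFS=,; printf '%s' "${config[*]}") ] } ] }
JSON
}

The printf output visible a little further on ('{...},{...}' for Nvme1 through Nvme10) is exactly this join with TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420 substituted.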
00:22:33.324 [2024-10-11 11:56:17.835535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.324 { 00:22:33.324 "params": { 00:22:33.324 "name": "Nvme$subsystem", 00:22:33.324 "trtype": "$TEST_TRANSPORT", 00:22:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.324 "adrfam": "ipv4", 00:22:33.324 "trsvcid": "$NVMF_PORT", 00:22:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.324 "hdgst": ${hdgst:-false}, 00:22:33.324 "ddgst": ${ddgst:-false} 00:22:33.324 }, 00:22:33.324 "method": "bdev_nvme_attach_controller" 00:22:33.324 } 00:22:33.324 EOF 00:22:33.324 )") 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.324 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:33.325 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:33.325 { 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme$subsystem", 00:22:33.325 "trtype": "$TEST_TRANSPORT", 00:22:33.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.325 "adrfam": "ipv4", 
00:22:33.325 "trsvcid": "$NVMF_PORT", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.325 "hdgst": ${hdgst:-false}, 00:22:33.325 "ddgst": ${ddgst:-false} 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 } 00:22:33.325 EOF 00:22:33.325 )") 00:22:33.325 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:33.325 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:33.325 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:33.325 11:56:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme1", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme2", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme3", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme4", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme5", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme6", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme7", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 
"adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme8", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme9", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 },{ 00:22:33.325 "params": { 00:22:33.325 "name": "Nvme10", 00:22:33.325 "trtype": "tcp", 00:22:33.325 "traddr": "10.0.0.2", 00:22:33.325 "adrfam": "ipv4", 00:22:33.325 "trsvcid": "4420", 00:22:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:33.325 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:33.325 "hdgst": false, 00:22:33.325 "ddgst": false 00:22:33.325 }, 00:22:33.325 "method": "bdev_nvme_attach_controller" 00:22:33.325 }' 00:22:33.325 [2024-10-11 11:56:17.914295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.325 [2024-10-11 11:56:17.951762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.234 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.234 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:35.234 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:35.234 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.234 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.234 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.234 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1083519 00:22:35.234 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:35.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1083519 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:35.235 11:56:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1083460 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.805 { 00:22:35.805 "params": { 00:22:35.805 "name": "Nvme$subsystem", 00:22:35.805 "trtype": "$TEST_TRANSPORT", 00:22:35.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.805 "adrfam": "ipv4", 00:22:35.805 "trsvcid": "$NVMF_PORT", 00:22:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.805 "hdgst": ${hdgst:-false}, 00:22:35.805 "ddgst": ${ddgst:-false} 00:22:35.805 }, 00:22:35.805 "method": "bdev_nvme_attach_controller" 00:22:35.805 } 00:22:35.805 EOF 00:22:35.805 )") 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.805 { 00:22:35.805 "params": { 00:22:35.805 "name": "Nvme$subsystem", 00:22:35.805 "trtype": "$TEST_TRANSPORT", 00:22:35.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.805 "adrfam": "ipv4", 00:22:35.805 "trsvcid": "$NVMF_PORT", 00:22:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.805 "hdgst": ${hdgst:-false}, 00:22:35.805 "ddgst": ${ddgst:-false} 00:22:35.805 }, 00:22:35.805 "method": "bdev_nvme_attach_controller" 00:22:35.805 } 00:22:35.805 EOF 00:22:35.805 )") 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.805 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.805 { 00:22:35.805 "params": { 00:22:35.805 "name": "Nvme$subsystem", 00:22:35.805 "trtype": "$TEST_TRANSPORT", 00:22:35.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.805 "adrfam": "ipv4", 00:22:35.805 "trsvcid": "$NVMF_PORT", 00:22:35.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.805 "hdgst": ${hdgst:-false}, 00:22:35.806 "ddgst": ${ddgst:-false} 00:22:35.806 }, 00:22:35.806 "method": "bdev_nvme_attach_controller" 00:22:35.806 } 00:22:35.806 EOF 00:22:35.806 )") 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.806 { 00:22:35.806 "params": { 00:22:35.806 "name": "Nvme$subsystem", 00:22:35.806 "trtype": "$TEST_TRANSPORT", 00:22:35.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.806 "adrfam": "ipv4", 00:22:35.806 "trsvcid": "$NVMF_PORT", 00:22:35.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.806 "hdgst": ${hdgst:-false}, 00:22:35.806 "ddgst": ${ddgst:-false} 00:22:35.806 }, 00:22:35.806 "method": "bdev_nvme_attach_controller" 00:22:35.806 } 00:22:35.806 EOF 00:22:35.806 )") 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.806 { 00:22:35.806 "params": { 00:22:35.806 "name": "Nvme$subsystem", 00:22:35.806 "trtype": "$TEST_TRANSPORT", 00:22:35.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.806 "adrfam": "ipv4", 00:22:35.806 "trsvcid": "$NVMF_PORT", 00:22:35.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.806 "hdgst": ${hdgst:-false}, 00:22:35.806 "ddgst": ${ddgst:-false} 00:22:35.806 }, 00:22:35.806 "method": "bdev_nvme_attach_controller" 00:22:35.806 } 00:22:35.806 EOF 00:22:35.806 )") 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.806 { 00:22:35.806 "params": { 00:22:35.806 "name": "Nvme$subsystem", 00:22:35.806 "trtype": "$TEST_TRANSPORT", 00:22:35.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.806 "adrfam": "ipv4", 00:22:35.806 "trsvcid": "$NVMF_PORT", 00:22:35.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.806 "hdgst": ${hdgst:-false}, 00:22:35.806 "ddgst": ${ddgst:-false} 00:22:35.806 }, 00:22:35.806 "method": "bdev_nvme_attach_controller" 00:22:35.806 } 00:22:35.806 EOF 00:22:35.806 )") 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.806 { 00:22:35.806 "params": { 00:22:35.806 "name": "Nvme$subsystem", 00:22:35.806 "trtype": "$TEST_TRANSPORT", 00:22:35.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.806 "adrfam": "ipv4", 00:22:35.806 "trsvcid": "$NVMF_PORT", 00:22:35.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.806 "hdgst": ${hdgst:-false}, 00:22:35.806 "ddgst": ${ddgst:-false} 00:22:35.806 }, 00:22:35.806 "method": "bdev_nvme_attach_controller" 00:22:35.806 } 00:22:35.806 EOF 00:22:35.806 )") 00:22:35.806 [2024-10-11 11:56:20.412575] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
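This second config generation exists because of what happened just before it (target/shutdown.sh@84-92 in the trace): the crux of tc1 is that SIGKILL-ing the initiator must not take the target down. Condensed, with the symbolic pids standing for 1083519 (bdev_svc) and 1083460 (the nvmf target):

kill -9 "$perfpid"    # hard-kill the bdev_svc initiator mid-session
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"    # signal 0: fails (and fails the test) if the target died too
"$rootdir/build/examples/bdevperf" --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1

The one-second verify run whose per-controller IOPS table appears below is that relaunched bdevperf reattaching to all ten subsystems of the surviving target.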
00:22:35.806 [2024-10-11 11:56:20.412629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084209 ] 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.806 { 00:22:35.806 "params": { 00:22:35.806 "name": "Nvme$subsystem", 00:22:35.806 "trtype": "$TEST_TRANSPORT", 00:22:35.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.806 "adrfam": "ipv4", 00:22:35.806 "trsvcid": "$NVMF_PORT", 00:22:35.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.806 "hdgst": ${hdgst:-false}, 00:22:35.806 "ddgst": ${ddgst:-false} 00:22:35.806 }, 00:22:35.806 "method": "bdev_nvme_attach_controller" 00:22:35.806 } 00:22:35.806 EOF 00:22:35.806 )") 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.806 { 00:22:35.806 "params": { 00:22:35.806 "name": "Nvme$subsystem", 00:22:35.806 "trtype": "$TEST_TRANSPORT", 00:22:35.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.806 "adrfam": "ipv4", 00:22:35.806 "trsvcid": "$NVMF_PORT", 00:22:35.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.806 "hdgst": ${hdgst:-false}, 00:22:35.806 "ddgst": ${ddgst:-false} 00:22:35.806 }, 00:22:35.806 "method": "bdev_nvme_attach_controller" 00:22:35.806 } 00:22:35.806 EOF 00:22:35.806 )") 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:35.806 { 00:22:35.806 "params": { 00:22:35.806 "name": "Nvme$subsystem", 00:22:35.806 "trtype": "$TEST_TRANSPORT", 00:22:35.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.806 "adrfam": "ipv4", 00:22:35.806 "trsvcid": "$NVMF_PORT", 00:22:35.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.806 "hdgst": ${hdgst:-false}, 00:22:35.806 "ddgst": ${ddgst:-false} 00:22:35.806 }, 00:22:35.806 "method": "bdev_nvme_attach_controller" 00:22:35.806 } 00:22:35.806 EOF 00:22:35.806 )") 00:22:35.806 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:36.066 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:22:36.066 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:36.066 11:56:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:36.066 "params": { 00:22:36.066 "name": "Nvme1", 00:22:36.066 "trtype": "tcp", 00:22:36.066 "traddr": "10.0.0.2", 00:22:36.066 "adrfam": "ipv4", 00:22:36.066 "trsvcid": "4420", 00:22:36.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:36.066 "hdgst": false, 00:22:36.066 "ddgst": false 00:22:36.066 }, 00:22:36.066 "method": "bdev_nvme_attach_controller" 00:22:36.066 },{ 00:22:36.066 "params": { 00:22:36.066 "name": "Nvme2", 00:22:36.066 "trtype": "tcp", 00:22:36.066 "traddr": "10.0.0.2", 00:22:36.066 "adrfam": "ipv4", 00:22:36.066 "trsvcid": "4420", 00:22:36.066 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:36.066 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:36.066 "hdgst": false, 00:22:36.066 "ddgst": false 00:22:36.066 }, 00:22:36.066 "method": "bdev_nvme_attach_controller" 00:22:36.066 },{ 00:22:36.066 "params": { 00:22:36.066 "name": "Nvme3", 00:22:36.066 "trtype": "tcp", 00:22:36.066 "traddr": "10.0.0.2", 00:22:36.066 "adrfam": "ipv4", 00:22:36.066 "trsvcid": "4420", 00:22:36.066 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:36.066 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:36.066 "hdgst": false, 00:22:36.066 "ddgst": false 00:22:36.066 }, 00:22:36.066 "method": "bdev_nvme_attach_controller" 00:22:36.066 },{ 00:22:36.066 "params": { 00:22:36.066 "name": "Nvme4", 00:22:36.066 "trtype": "tcp", 00:22:36.066 "traddr": "10.0.0.2", 00:22:36.066 "adrfam": "ipv4", 00:22:36.066 "trsvcid": "4420", 00:22:36.066 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:36.066 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:36.066 "hdgst": false, 00:22:36.066 "ddgst": false 00:22:36.066 }, 00:22:36.066 "method": "bdev_nvme_attach_controller" 00:22:36.066 },{ 00:22:36.066 "params": { 00:22:36.066 "name": "Nvme5", 00:22:36.066 "trtype": "tcp", 00:22:36.066 "traddr": "10.0.0.2", 00:22:36.066 "adrfam": "ipv4", 00:22:36.066 "trsvcid": "4420", 00:22:36.066 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:36.066 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:36.066 "hdgst": false, 00:22:36.066 "ddgst": false 00:22:36.066 }, 00:22:36.066 "method": "bdev_nvme_attach_controller" 00:22:36.066 },{ 00:22:36.066 "params": { 00:22:36.066 "name": "Nvme6", 00:22:36.066 "trtype": "tcp", 00:22:36.066 "traddr": "10.0.0.2", 00:22:36.066 "adrfam": "ipv4", 00:22:36.066 "trsvcid": "4420", 00:22:36.066 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:36.066 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:36.066 "hdgst": false, 00:22:36.066 "ddgst": false 00:22:36.066 }, 00:22:36.066 "method": "bdev_nvme_attach_controller" 00:22:36.066 },{ 00:22:36.066 "params": { 00:22:36.066 "name": "Nvme7", 00:22:36.066 "trtype": "tcp", 00:22:36.066 "traddr": "10.0.0.2", 00:22:36.067 "adrfam": "ipv4", 00:22:36.067 "trsvcid": "4420", 00:22:36.067 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:36.067 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:36.067 "hdgst": false, 00:22:36.067 "ddgst": false 00:22:36.067 }, 00:22:36.067 "method": "bdev_nvme_attach_controller" 00:22:36.067 },{ 00:22:36.067 "params": { 00:22:36.067 "name": "Nvme8", 00:22:36.067 "trtype": "tcp", 00:22:36.067 "traddr": "10.0.0.2", 00:22:36.067 "adrfam": "ipv4", 00:22:36.067 "trsvcid": "4420", 00:22:36.067 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:36.067 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:36.067 "hdgst": false, 00:22:36.067 "ddgst": false 00:22:36.067 }, 00:22:36.067 "method": "bdev_nvme_attach_controller" 00:22:36.067 },{ 00:22:36.067 "params": { 00:22:36.067 "name": "Nvme9", 00:22:36.067 "trtype": "tcp", 00:22:36.067 "traddr": "10.0.0.2", 00:22:36.067 "adrfam": "ipv4", 00:22:36.067 "trsvcid": "4420", 00:22:36.067 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:36.067 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:36.067 "hdgst": false, 00:22:36.067 "ddgst": false 00:22:36.067 }, 00:22:36.067 "method": "bdev_nvme_attach_controller" 00:22:36.067 },{ 00:22:36.067 "params": { 00:22:36.067 "name": "Nvme10", 00:22:36.067 "trtype": "tcp", 00:22:36.067 "traddr": "10.0.0.2", 00:22:36.067 "adrfam": "ipv4", 00:22:36.067 "trsvcid": "4420", 00:22:36.067 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:36.067 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:36.067 "hdgst": false, 00:22:36.067 "ddgst": false 00:22:36.067 }, 00:22:36.067 "method": "bdev_nvme_attach_controller" 00:22:36.067 }' 00:22:36.067 [2024-10-11 11:56:20.490923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.067 [2024-10-11 11:56:20.526957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.449 Running I/O for 1 seconds... 00:22:38.650 1869.00 IOPS, 116.81 MiB/s 00:22:38.650 Latency(us) 00:22:38.650 [2024-10-11T09:56:23.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.650 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme1n1 : 1.11 239.80 14.99 0.00 0.00 257479.28 32986.45 212336.64 00:22:38.650 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme2n1 : 1.14 224.65 14.04 0.00 0.00 277342.72 16930.13 246415.36 00:22:38.650 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme3n1 : 1.08 237.22 14.83 0.00 0.00 257521.71 19442.35 258648.75 00:22:38.650 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme4n1 : 1.18 270.33 16.90 0.00 0.00 222809.34 13544.11 248162.99 00:22:38.650 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme5n1 : 1.14 223.65 13.98 0.00 0.00 264448.21 14854.83 269134.51 00:22:38.650 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme6n1 : 1.17 218.21 13.64 0.00 0.00 265683.41 16274.77 249910.61 00:22:38.650 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme7n1 : 1.19 268.34 16.77 0.00 0.00 213442.22 13981.01 237677.23 00:22:38.650 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme8n1 : 1.19 269.14 16.82 0.00 0.00 208986.97 20316.16 227191.47 00:22:38.650 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme9n1 : 1.20 267.05 16.69 0.00 0.00 207016.45 12451.84 244667.73 00:22:38.650 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:38.650 Verification LBA range: start 0x0 length 0x400 00:22:38.650 Nvme10n1 : 1.18 220.66 13.79 0.00 0.00 244709.06 3153.92 269134.51 00:22:38.650 [2024-10-11T09:56:23.282Z] =================================================================================================================== 00:22:38.650 [2024-10-11T09:56:23.282Z] Total : 2439.04 152.44 0.00 0.00 239392.14 3153.92 269134.51 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.911 rmmod nvme_tcp 00:22:38.911 rmmod nvme_fabrics 00:22:38.911 rmmod nvme_keyring 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1083460 ']' 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1083460 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1083460 ']' 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1083460 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1083460 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1083460' 00:22:38.911 killing process with pid 1083460 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1083460 00:22:38.911 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1083460 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.171 11:56:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.716 00:22:41.716 real 0m16.031s 00:22:41.716 user 0m32.150s 00:22:41.716 sys 0m6.504s 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:41.716 ************************************ 00:22:41.716 END TEST nvmf_shutdown_tc1 00:22:41.716 ************************************ 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:41.716 ************************************ 00:22:41.716 START TEST nvmf_shutdown_tc2 00:22:41.716 ************************************ 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:41.716 11:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.716 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:41.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:41.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:41.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:41.717 11:56:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:41.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.717 11:56:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.717 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.717 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.717 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.717 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.717 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:41.717 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:41.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:22:41.717 00:22:41.718 --- 10.0.0.2 ping statistics --- 00:22:41.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.718 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:22:41.718 00:22:41.718 --- 10.0.0.1 ping statistics --- 00:22:41.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.718 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.718 11:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1085322 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1085322 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1085322 ']' 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.718 11:56:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:41.718 [2024-10-11 11:56:26.254275] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:41.718 [2024-10-11 11:56:26.254337] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.718 [2024-10-11 11:56:26.340632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.978 [2024-10-11 11:56:26.375909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.978 [2024-10-11 11:56:26.375941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.978 [2024-10-11 11:56:26.375947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.978 [2024-10-11 11:56:26.375951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.978 [2024-10-11 11:56:26.375956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
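Behind the waitforlisten call above, the harness simply retries the target's RPC socket until the freshly forked nvmf_tgt answers. A minimal sketch of that wait, assuming an SPDK checkout with scripts/rpc.py on hand (wait_for_rpc is an illustrative name, not the harness function, and the retry counts are placeholders):

    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        while (( retries-- > 0 )); do
            # rpc_get_methods only succeeds once the app's RPC server is listening
            if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }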
00:22:41.978 [2024-10-11 11:56:26.377552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.978 [2024-10-11 11:56:26.377746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.978 [2024-10-11 11:56:26.378081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.978 [2024-10-11 11:56:26.378081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.549 [2024-10-11 11:56:27.100711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.549 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.550 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:42.809 Malloc1 00:22:42.809 [2024-10-11 11:56:27.218385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.809 Malloc2 00:22:42.809 Malloc3 00:22:42.809 Malloc4 00:22:42.809 Malloc5 00:22:42.809 Malloc6 00:22:42.809 Malloc7 00:22:43.071 Malloc8 00:22:43.071 Malloc9 00:22:43.071 Malloc10 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1085705 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1085705 /var/tmp/bdevperf.sock 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1085705 ']' 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.071 11:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.071 { 00:22:43.071 "params": { 00:22:43.071 "name": "Nvme$subsystem", 00:22:43.071 "trtype": "$TEST_TRANSPORT", 00:22:43.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.071 "adrfam": "ipv4", 00:22:43.071 "trsvcid": "$NVMF_PORT", 00:22:43.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.071 "hdgst": ${hdgst:-false}, 00:22:43.071 "ddgst": ${ddgst:-false} 00:22:43.071 }, 00:22:43.071 "method": "bdev_nvme_attach_controller" 00:22:43.071 } 00:22:43.071 EOF 00:22:43.071 )") 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.071 { 00:22:43.071 "params": { 00:22:43.071 "name": "Nvme$subsystem", 00:22:43.071 "trtype": "$TEST_TRANSPORT", 00:22:43.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.071 "adrfam": "ipv4", 00:22:43.071 "trsvcid": "$NVMF_PORT", 00:22:43.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.071 "hdgst": ${hdgst:-false}, 00:22:43.071 "ddgst": ${ddgst:-false} 00:22:43.071 }, 00:22:43.071 "method": "bdev_nvme_attach_controller" 00:22:43.071 } 00:22:43.071 EOF 00:22:43.071 )") 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.071 { 00:22:43.071 "params": { 00:22:43.071 
"name": "Nvme$subsystem", 00:22:43.071 "trtype": "$TEST_TRANSPORT", 00:22:43.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.071 "adrfam": "ipv4", 00:22:43.071 "trsvcid": "$NVMF_PORT", 00:22:43.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.071 "hdgst": ${hdgst:-false}, 00:22:43.071 "ddgst": ${ddgst:-false} 00:22:43.071 }, 00:22:43.071 "method": "bdev_nvme_attach_controller" 00:22:43.071 } 00:22:43.071 EOF 00:22:43.071 )") 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.071 { 00:22:43.071 "params": { 00:22:43.071 "name": "Nvme$subsystem", 00:22:43.071 "trtype": "$TEST_TRANSPORT", 00:22:43.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.071 "adrfam": "ipv4", 00:22:43.071 "trsvcid": "$NVMF_PORT", 00:22:43.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.071 "hdgst": ${hdgst:-false}, 00:22:43.071 "ddgst": ${ddgst:-false} 00:22:43.071 }, 00:22:43.071 "method": "bdev_nvme_attach_controller" 00:22:43.071 } 00:22:43.071 EOF 00:22:43.071 )") 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.071 { 00:22:43.071 "params": { 00:22:43.071 "name": "Nvme$subsystem", 00:22:43.071 "trtype": "$TEST_TRANSPORT", 00:22:43.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.071 "adrfam": "ipv4", 00:22:43.071 "trsvcid": "$NVMF_PORT", 00:22:43.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.071 "hdgst": ${hdgst:-false}, 00:22:43.071 "ddgst": ${ddgst:-false} 00:22:43.071 }, 00:22:43.071 "method": "bdev_nvme_attach_controller" 00:22:43.071 } 00:22:43.071 EOF 00:22:43.071 )") 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.071 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.071 { 00:22:43.071 "params": { 00:22:43.071 "name": "Nvme$subsystem", 00:22:43.071 "trtype": "$TEST_TRANSPORT", 00:22:43.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.071 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "$NVMF_PORT", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.072 "hdgst": ${hdgst:-false}, 00:22:43.072 "ddgst": ${ddgst:-false} 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 } 00:22:43.072 EOF 00:22:43.072 )") 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.072 [2024-10-11 11:56:27.661510] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:22:43.072 [2024-10-11 11:56:27.661565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085705 ] 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.072 { 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme$subsystem", 00:22:43.072 "trtype": "$TEST_TRANSPORT", 00:22:43.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "$NVMF_PORT", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.072 "hdgst": ${hdgst:-false}, 00:22:43.072 "ddgst": ${ddgst:-false} 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 } 00:22:43.072 EOF 00:22:43.072 )") 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.072 { 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme$subsystem", 00:22:43.072 "trtype": "$TEST_TRANSPORT", 00:22:43.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "$NVMF_PORT", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.072 "hdgst": ${hdgst:-false}, 00:22:43.072 "ddgst": ${ddgst:-false} 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 } 00:22:43.072 EOF 00:22:43.072 )") 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.072 { 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme$subsystem", 00:22:43.072 "trtype": "$TEST_TRANSPORT", 00:22:43.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "$NVMF_PORT", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.072 "hdgst": ${hdgst:-false}, 00:22:43.072 "ddgst": ${ddgst:-false} 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 } 00:22:43.072 EOF 00:22:43.072 )") 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:43.072 { 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme$subsystem", 00:22:43.072 "trtype": "$TEST_TRANSPORT", 00:22:43.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.072 
"adrfam": "ipv4", 00:22:43.072 "trsvcid": "$NVMF_PORT", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.072 "hdgst": ${hdgst:-false}, 00:22:43.072 "ddgst": ${ddgst:-false} 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 } 00:22:43.072 EOF 00:22:43.072 )") 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:43.072 11:56:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme1", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme2", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme3", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme4", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme5", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme6", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme7", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 
00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme8", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme9", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 },{ 00:22:43.072 "params": { 00:22:43.072 "name": "Nvme10", 00:22:43.072 "trtype": "tcp", 00:22:43.072 "traddr": "10.0.0.2", 00:22:43.072 "adrfam": "ipv4", 00:22:43.072 "trsvcid": "4420", 00:22:43.072 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:43.072 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:43.072 "hdgst": false, 00:22:43.072 "ddgst": false 00:22:43.072 }, 00:22:43.072 "method": "bdev_nvme_attach_controller" 00:22:43.072 }' 00:22:43.333 [2024-10-11 11:56:27.739109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.333 [2024-10-11 11:56:27.776373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.716 Running I/O for 10 seconds... 
00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.716 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.717 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.717 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:44.717 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:44.717 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:44.976 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:44.977 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:44.977 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:44.977 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:44.977 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.977 11:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.977 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.977 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:44.977 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:44.977 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1085705 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1085705 ']' 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1085705 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:45.249 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.533 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1085705 00:22:45.533 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.533 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.533 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1085705' 00:22:45.533 killing process with pid 1085705 00:22:45.533 11:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1085705
00:22:45.533 11:56:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1085705
00:22:45.533 2308.00 IOPS, 144.25 MiB/s
[2024-10-11T09:56:30.165Z] Received shutdown signal, test time was about 1.018491 seconds
00:22:45.533 
00:22:45.533 Latency(us)
[2024-10-11T09:56:30.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.533 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme1n1 : 0.97 263.51 16.47 0.00 0.00 240049.07 20097.71 241172.48
00:22:45.533 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme2n1 : 0.96 266.17 16.64 0.00 0.00 232704.85 18896.21 222822.40
00:22:45.533 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme3n1 : 0.97 264.73 16.55 0.00 0.00 229262.72 19333.12 248162.99
00:22:45.533 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme4n1 : 0.96 265.44 16.59 0.00 0.00 224134.83 20534.61 255153.49
00:22:45.533 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme5n1 : 0.94 204.05 12.75 0.00 0.00 284363.66 18240.85 255153.49
00:22:45.533 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme6n1 : 0.95 206.71 12.92 0.00 0.00 273569.89 3126.61 248162.99
00:22:45.533 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme7n1 : 1.02 251.57 15.72 0.00 0.00 212748.80 14199.47 249910.61
00:22:45.533 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme8n1 : 0.95 272.86 17.05 0.00 0.00 198045.91 6307.84 248162.99
00:22:45.533 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme9n1 : 0.95 201.97 12.62 0.00 0.00 262737.64 21408.43 244667.73
00:22:45.533 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:45.533 Verification LBA range: start 0x0 length 0x400
00:22:45.533 Nvme10n1 : 0.96 200.47 12.53 0.00 0.00 258876.30 20097.71 267386.88
00:22:45.533 [2024-10-11T09:56:30.165Z] ===================================================================================================================
00:22:45.533 [2024-10-11T09:56:30.165Z] Total : 2397.49 149.84 0.00 0.00 238502.51 3126.61 267386.88
00:22:45.810 11:56:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1085322
11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f 
./local-job0-0-verify.state 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.800 rmmod nvme_tcp 00:22:46.800 rmmod nvme_fabrics 00:22:46.800 rmmod nvme_keyring 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:46.800 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1085322 ']' 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1085322 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1085322 ']' 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1085322 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1085322 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1085322' 00:22:46.801 killing process with pid 1085322 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1085322 00:22:46.801 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1085322 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' 
== iso ']' 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.061 11:56:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.605 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.605 00:22:49.605 real 0m7.794s 00:22:49.605 user 0m23.432s 00:22:49.605 sys 0m1.247s 00:22:49.605 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:49.605 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:49.605 ************************************ 00:22:49.605 END TEST nvmf_shutdown_tc2 00:22:49.605 ************************************ 00:22:49.605 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:49.605 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:49.606 ************************************ 00:22:49.606 START TEST nvmf_shutdown_tc3 00:22:49.606 ************************************ 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:49.606 11:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.606 11:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:49.606 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:49.606 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:49.606 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:49.606 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
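The "Found net devices under ..." lines come from globbing each PCI function's net/ directory in sysfs, exactly the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) pattern visible at nvmf/common.sh@409. As a standalone walk (BDFs hard-coded to this host's two E810 ports; adjust for other rigs):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [[ -e $path ]] || continue        # a function bound to a non-net driver exposes nothing
            echo "Found net devices under $pci: ${path##*/}"
        done
    done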
00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.606 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.607 11:56:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:22:49.607 00:22:49.607 --- 10.0.0.2 ping statistics --- 00:22:49.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.607 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:22:49.607 00:22:49.607 --- 10.0.0.1 ping statistics --- 00:22:49.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.607 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1087077 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1087077 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:49.607 11:56:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1087077 ']' 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.607 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:49.607 [2024-10-11 11:56:34.140082] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:49.607 [2024-10-11 11:56:34.140142] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.607 [2024-10-11 11:56:34.227894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.867 [2024-10-11 11:56:34.269223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.867 [2024-10-11 11:56:34.269264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.867 [2024-10-11 11:56:34.269270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.867 [2024-10-11 11:56:34.269275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.867 [2024-10-11 11:56:34.269280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
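The nvmf_tcp_init sequence traced above (common.sh@250 through @291) amounts to the plumbing below, reproduced without the xtrace noise; interface names, addresses, and the iptables comment tag are taken verbatim from the log. The idea is to move the target-side port into its own network namespace so target and initiator traffic actually traverse the link between the two e810 ports instead of being looped back by the kernel.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) in, tagged so the rule can be cleaned up later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Sanity-check reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1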
00:22:49.867 [2024-10-11 11:56:34.271132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.867 [2024-10-11 11:56:34.271259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.867 [2024-10-11 11:56:34.271379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.867 [2024-10-11 11:56:34.271381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.439 11:56:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.439 [2024-10-11 11:56:35.000928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.439 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.699 Malloc1 00:22:50.699 [2024-10-11 11:56:35.111483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.699 Malloc2 00:22:50.699 Malloc3 00:22:50.699 Malloc4 00:22:50.699 Malloc5 00:22:50.699 Malloc6 00:22:50.699 Malloc7 00:22:50.960 Malloc8 00:22:50.960 Malloc9 00:22:50.960 Malloc10 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1087298 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1087298 /var/tmp/bdevperf.sock 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1087298 ']' 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.960 11:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.960 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 
"name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 [2024-10-11 11:56:35.557009] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:22:50.961 [2024-10-11 11:56:35.557063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087298 ] 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 "adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:50.961 { 00:22:50.961 "params": { 00:22:50.961 "name": "Nvme$subsystem", 00:22:50.961 "trtype": "$TEST_TRANSPORT", 00:22:50.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.961 
"adrfam": "ipv4", 00:22:50.961 "trsvcid": "$NVMF_PORT", 00:22:50.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.961 "hdgst": ${hdgst:-false}, 00:22:50.961 "ddgst": ${ddgst:-false} 00:22:50.961 }, 00:22:50.961 "method": "bdev_nvme_attach_controller" 00:22:50.961 } 00:22:50.961 EOF 00:22:50.961 )") 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:22:50.961 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:51.223 11:56:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme1", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme2", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme3", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme4", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme5", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme6", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme7", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 
00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme8", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme9", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 },{ 00:22:51.223 "params": { 00:22:51.223 "name": "Nvme10", 00:22:51.223 "trtype": "tcp", 00:22:51.223 "traddr": "10.0.0.2", 00:22:51.223 "adrfam": "ipv4", 00:22:51.223 "trsvcid": "4420", 00:22:51.223 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.223 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.223 "hdgst": false, 00:22:51.223 "ddgst": false 00:22:51.223 }, 00:22:51.223 "method": "bdev_nvme_attach_controller" 00:22:51.223 }' 00:22:51.223 [2024-10-11 11:56:35.636446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.223 [2024-10-11 11:56:35.673150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.605 Running I/O for 10 seconds... 
00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.605 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.866 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.866 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:52.866 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:52.866 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:53.127 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=137 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 137 -ge 100 ']' 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1087077 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1087077 ']' 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1087077 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1087077 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:53.404 11:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1087077'
00:22:53.404 killing process with pid 1087077
00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1087077
00:22:53.404 11:56:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1087077
00:22:53.404 [2024-10-11 11:56:37.956190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edca60 is same with the state(6) to be set
00:22:53.404 [identical tcp.c:1773 message for tqpair=0x1edca60 repeated through 11:56:37.956545; duplicates elided]
00:22:53.405 [2024-10-11 11:56:37.960051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edf5f0 is same with the state(6) to be set
00:22:53.405 [identical tcp.c:1773 message for tqpair=0x1edf5f0 repeated through 11:56:37.960405; duplicates elided, and the concurrently written nvme_qpair.c/nvme_tcp.c entries below untangled from mid-line interleaving]
00:22:53.405 [2024-10-11 11:56:37.960181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.405 [2024-10-11 11:56:37.960218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.405 [2024-10-11 11:56:37.960235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.405 [2024-10-11 11:56:37.960250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.405 [2024-10-11 11:56:37.960261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.405 [2024-10-11 11:56:37.960276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.405 [2024-10-11 11:56:37.960285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.406 [2024-10-11 11:56:37.960296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.406 [2024-10-11 11:56:37.960304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee31b0 is same with the state(6) to be set
00:22:53.406 [2024-10-11 11:56:37.960404] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.406 [2024-10-11 11:56:37.961800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set
00:22:53.406 [identical tcp.c:1773 message for tqpair=0x1edcf30 repeated; duplicates elided, log truncated at 11:56:37.961975]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.961980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.961984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.961989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.961994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.961999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the 
state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edcf30 is same with the state(6) to be set 00:22:53.406 [2024-10-11 11:56:37.962590] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:53.406 [2024-10-11 11:56:37.963445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same 
with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963646] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the 
state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.963777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd400 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd8f0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.964995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.407 [2024-10-11 11:56:37.965066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 
11:56:37.965119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same 
with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edddc0 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.965997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966040] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.408 [2024-10-11 11:56:37.966143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the 
state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.966266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede290 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 
11:56:37.967108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same 
with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.409 [2024-10-11 11:56:37.967300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967309] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede760 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the 
state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.967998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.968117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 11:56:37.977492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:22:53.410 [2024-10-11 
00:22:53.410 [2024-10-11 11:56:37.977960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edf120 is same with the state(6) to be set
00:22:53.410 [2024-10-11 11:56:37.981094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:53.410 [2024-10-11 11:56:37.981116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (0c) / ABORTED - SQ DELETION (00/08) pair repeats for qid:0 cid:1-3, and the whole four-command abort block repeats for tqpairs 0x130d150, 0x1304130, 0xee0c30, 0xee06b0, 0x13336e0, 0x1334a20, 0xed9e90, 0xdfb610, and 0x130d470, each block ending with nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=<addr> is same with the state(6) to be set; 11:56:37.981125 through 11:56:37.981901 ...]
00:22:53.410 [2024-10-11 11:56:37.981289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee31b0 (9): Bad file descriptor
00:22:53.411 [2024-10-11 11:56:37.981908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130d470 is same with the state(6) to be set
00:22:53.411 [2024-10-11 11:56:37.982464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.411 [2024-10-11 11:56:37.982484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION (00/08) pair repeats for sqid:1 cid:1 through cid:63 (lba:24704 through lba:32640, len:128 each), 11:56:37.982501 through 11:56:37.983575 ...]
00:22:53.413 [2024-10-11 11:56:37.983636] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x140ae80 was disconnected and freed. reset controller.
00:22:53.413 [2024-10-11 11:56:37.983763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... WRITE cid:62-63 (lba:32512, lba:32640) and READ cid:0 through cid:61 (lba:24576 through lba:32384, len:128 each) on sqid:1 all aborted with SQ DELETION (00/08), 11:56:37.983775 through 11:56:37.984839 ...]
00:22:53.415 [2024-10-11 11:56:37.984890] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12d9e40 was disconnected and freed. reset controller.
00:22:53.415 [2024-10-11 11:56:37.987726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:53.415 [2024-10-11 11:56:37.987757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:53.415 [2024-10-11 11:56:37.987779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304130 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.987792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d150 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.987837] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.415 [2024-10-11 11:56:37.988400] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.415 [2024-10-11 11:56:37.988716] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.415 [2024-10-11 11:56:37.988751] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.415 [2024-10-11 11:56:37.988795] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.415 [2024-10-11 11:56:37.988838] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:53.415 [2024-10-11 11:56:37.989075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.415 [2024-10-11 11:56:37.989093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130d150 with addr=10.0.0.2, port=4420
00:22:53.415 [2024-10-11 11:56:37.989102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130d150 is same with the state(6) to be set
00:22:53.415 [2024-10-11 11:56:37.989395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.415 [2024-10-11 11:56:37.989405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1304130 with addr=10.0.0.2, port=4420
00:22:53.415 [2024-10-11 11:56:37.989413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304130 is same with the state(6) to be set
00:22:53.415 [2024-10-11 11:56:37.989498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d150 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.989512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304130 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.989556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:53.415 [2024-10-11 11:56:37.989564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:53.415 [2024-10-11 11:56:37.989572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:53.415 [2024-10-11 11:56:37.989586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:22:53.415 [2024-10-11 11:56:37.989593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:22:53.415 [2024-10-11 11:56:37.989600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:22:53.415 [2024-10-11 11:56:37.989649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.415 [2024-10-11 11:56:37.989658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.415 [2024-10-11 11:56:37.991114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee0c30 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.991132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee06b0 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.991150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13336e0 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.991170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1334a20 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.991188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed9e90 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.991204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfb610 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.991223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d470 (9): Bad file descriptor
00:22:53.415 [2024-10-11 11:56:37.991328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.415 [2024-10-11 11:56:37.991685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.415 [2024-10-11 11:56:37.991692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.991986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.991993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.416 [2024-10-11 11:56:37.992309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.416 [2024-10-11 11:56:37.992319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:37.992326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:37.992335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:37.992343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:37.992352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:37.992359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:37.992368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:37.992376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:37.992385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:37.992392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:37.992402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:37.992409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:37.992417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1412d80 is same with the state(6) to be set
00:22:53.417 [2024-10-11 11:56:37.993751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:53.417 [2024-10-11 11:56:37.994136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.417 [2024-10-11 11:56:37.994151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee31b0 with addr=10.0.0.2, port=4420
00:22:53.417 [2024-10-11 11:56:37.994160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee31b0 is same with the state(6) to be set
00:22:53.417 [2024-10-11 11:56:37.994462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee31b0 (9): Bad file descriptor
00:22:53.417 [2024-10-11 11:56:37.994518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:53.417 [2024-10-11 11:56:37.994526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:53.417 [2024-10-11 11:56:37.994534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:53.417 [2024-10-11 11:56:37.994582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.417 [2024-10-11 11:56:37.998483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:53.417 [2024-10-11 11:56:37.998502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:53.417 [2024-10-11 11:56:37.998869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.417 [2024-10-11 11:56:37.998884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1304130 with addr=10.0.0.2, port=4420
00:22:53.417 [2024-10-11 11:56:37.998892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304130 is same with the state(6) to be set
00:22:53.417 [2024-10-11 11:56:37.999189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.417 [2024-10-11 11:56:37.999200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130d150 with addr=10.0.0.2, port=4420
00:22:53.417 [2024-10-11 11:56:37.999207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130d150 is same with the state(6) to be set
00:22:53.417 [2024-10-11 11:56:37.999247] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304130 (9): Bad file descriptor
00:22:53.417 [2024-10-11 11:56:37.999257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d150 (9): Bad file descriptor
00:22:53.417 [2024-10-11 11:56:37.999295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:22:53.417 [2024-10-11 11:56:37.999301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:22:53.417 [2024-10-11 11:56:37.999309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:22:53.417 [2024-10-11 11:56:37.999321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:53.417 [2024-10-11 11:56:37.999327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:53.417 [2024-10-11 11:56:37.999334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:53.417 [2024-10-11 11:56:37.999376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.417 [2024-10-11 11:56:37.999384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:53.417 [2024-10-11 11:56:38.001273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.417 [2024-10-11 11:56:38.001586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.417 [2024-10-11 11:56:38.001596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.001983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.001993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.418 [2024-10-11 11:56:38.002308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.418 [2024-10-11 11:56:38.002318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.002325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.002334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.002342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.002351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.002359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.002368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.002376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.002384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121f700 is same with the state(6) to be set
00:22:53.419 [2024-10-11 11:56:38.003664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.003989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.003999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.004006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.004016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.004023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.004033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.004040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.004049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.004056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.004066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.004073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.004083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.004090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.004100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.004107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.419 [2024-10-11 11:56:38.004119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.419 [2024-10-11 11:56:38.004126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.419 [2024-10-11 11:56:38.004136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.419 [2024-10-11 11:56:38.004143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.419 [2024-10-11 11:56:38.004152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.419 [2024-10-11 11:56:38.004160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.419 [2024-10-11 11:56:38.004169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.419 [2024-10-11 11:56:38.004176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.419 [2024-10-11 11:56:38.004186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.419 [2024-10-11 11:56:38.004193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.419 [2024-10-11 11:56:38.004203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.419 [2024-10-11 11:56:38.004210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.419 [2024-10-11 11:56:38.004220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.419 [2024-10-11 11:56:38.004227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.419 [2024-10-11 11:56:38.004237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.419 [2024-10-11 11:56:38.004244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.419 [2024-10-11 11:56:38.004253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.420 [2024-10-11 11:56:38.004474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 
11:56:38.004646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.004777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.004785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cca80 is same with the state(6) to be set 00:22:53.420 [2024-10-11 11:56:38.006068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.006084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.006097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.006106] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.006117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.006127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.006138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.006147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.006157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.006166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.420 [2024-10-11 11:56:38.006177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.420 [2024-10-11 11:56:38.006186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.421 [2024-10-11 11:56:38.006732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.421 [2024-10-11 11:56:38.006739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.006989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.006996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.422 [2024-10-11 11:56:38.007174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.422 [2024-10-11 11:56:38.007182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.423 [2024-10-11 11:56:38.007190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e7200 is same with the state(6) to be set 00:22:53.423 [2024-10-11 11:56:38.008457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.423 [2024-10-11 11:56:38.008469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.423 [2024-10-11 11:56:38.008480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.423 [2024-10-11 11:56:38.008488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.423 [2024-10-11 11:56:38.008497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.423 [2024-10-11 11:56:38.008505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.423 [2024-10-11 11:56:38.008514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.423 [2024-10-11 11:56:38.008522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.423 [2024-10-11 11:56:38.008531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.423 [2024-10-11 11:56:38.008539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.423 [2024-10-11 11:56:38.008548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.423 [2024-10-11 11:56:38.008556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.423 [2024-10-11 11:56:38.008565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.423 [2024-10-11 11:56:38.008572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.423 [2024-10-11 11:56:38.008582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
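A hedged triage note, not output from the CI run itself: each burst above covers a 64-deep READ queue (cid:0-63, len:128 per command; the first burst is cut at the top of this excerpt) whose commands complete as ABORTED - SQ DELETION (00/08) while the corresponding TCP qpair (0x121f700, 0x12cca80, 0x12e7200) is torn down, consistent with the test deleting submission queues mid-I/O. Assuming this console output is saved to a file named build.log (a hypothetical name), the aborts and affected qpairs can be tallied with standard tools:
    # count completions aborted by submission-queue deletion
    grep -c 'ABORTED - SQ DELETION (00/08)' build.log
    # list each tqpair pointer with how many recv-state messages mention it
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c
The first command counts the aborted completions; the second groups the nvme_tcp recv-state errors by qpair pointer, so a burst without a matching teardown message stands out.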
00:22:53.423 [2024-10-11 11:56:38.008457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:53.423 [2024-10-11 11:56:38.008469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 56 more identical READ/ABORTED - SQ DELETION (00/08) pairs for cid:1-56 (lba:24704-31744, len:128) ...]
00:22:53.424 [2024-10-11 11:56:38.009438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.424 [2024-10-11
11:56:38.009445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.424 [2024-10-11 11:56:38.009454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.424 [2024-10-11 11:56:38.009461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.424 [2024-10-11 11:56:38.009472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.424 [2024-10-11 11:56:38.009479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.424 [2024-10-11 11:56:38.009489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.424 [2024-10-11 11:56:38.009496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.424 [2024-10-11 11:56:38.009505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.424 [2024-10-11 11:56:38.009512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.424 [2024-10-11 11:56:38.009522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.424 [2024-10-11 11:56:38.009529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.424 [2024-10-11 11:56:38.009538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.009545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.009554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e8730 is same with the state(6) to be set 00:22:53.425 [2024-10-11 11:56:38.010836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.010989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.010999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.425 [2024-10-11 11:56:38.011391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.425 [2024-10-11 11:56:38.011398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.011932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.011942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9c60 is same with the state(6) to be set 00:22:53.426 [2024-10-11 11:56:38.013213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.013226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.013238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.013246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.013255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.013264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.013275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.013284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.013294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.013302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.013315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.013323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.426 [2024-10-11 11:56:38.013333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.426 [2024-10-11 11:56:38.013340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013375] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.427 [2024-10-11 11:56:38.013945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.427 [2024-10-11 11:56:38.013954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.013963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.013974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.013981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.013990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.013998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:53.428 [2024-10-11 11:56:38.014074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 
11:56:38.014241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.014315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.014324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222eeb0 is same with the state(6) to be set 00:22:53.428 [2024-10-11 11:56:38.015584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.015596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.015607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.015614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.015624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.015631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.015641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.015651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.015660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.015672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.428 [2024-10-11 11:56:38.015681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.428 [2024-10-11 11:56:38.015689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ (len:128) / ABORTED - SQ DELETION (00/08) notice pair repeats for cid:6 through cid:63, lba 17152 through 24448; timestamps 11:56:38.015698-11:56:38.016675 elided ...]
00:22:53.430 [2024-10-11 11:56:38.016683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140c390 is same with the state(6) to be set 00:22:53.430 [2024-10-11 11:56:38.018200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:53.430 [2024-10-11 11:56:38.018225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:53.430 [2024-10-11 11:56:38.018236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:53.430 [2024-10-11 11:56:38.018247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:53.430 [2024-10-11 11:56:38.018329] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.430 [2024-10-11 11:56:38.018342] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.430 [2024-10-11 11:56:38.018353] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:53.430 [2024-10-11 11:56:38.018437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:53.430 [2024-10-11 11:56:38.018447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:53.692 task offset: 24576 on job bdev=Nvme9n1 fails
00:22:53.692
00:22:53.692 Latency(us)
00:22:53.692 [2024-10-11T09:56:38.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:53.692 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme1n1 ended in about 0.97 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme1n1 : 0.97 202.66 12.67 65.84 0.00 235719.31 19770.03 241172.48
00:22:53.692 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme2n1 ended in about 0.98 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme2n1 : 0.98 199.59 12.47 65.17 0.00 234397.38 9939.63 237677.23
00:22:53.692 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme3n1 ended in about 0.98 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme3n1 : 0.98 195.04 12.19 65.01 0.00 233939.20 20862.29 244667.73
00:22:53.692 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme4n1 ended in about 0.97 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme4n1 : 0.97 198.76 12.42 66.25 0.00 224617.76 4369.07 255153.49
00:22:53.692 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme5n1 ended in about 0.99 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme5n1 : 0.99 129.71 8.11 64.86 0.00 300209.78 16602.45 253405.87
00:22:53.692 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme6n1 ended in about 0.99 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme6n1 : 0.99 194.11 12.13 64.70 0.00 220890.67 18568.53 246415.36
00:22:53.692 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme7n1 ended in about 0.99 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme7n1 : 0.99 193.64 12.10 64.55 0.00 216666.56 9502.72 256901.12
00:22:53.692 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme8n1 ended in about 0.99 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme8n1 : 0.99 128.78 8.05 64.39 0.00 283418.17 21736.11 270882.13
00:22:53.692 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme9n1 ended in about 0.96 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme9n1 : 0.96 199.03 12.44 66.34 0.00 200489.23 4287.15 255153.49
00:22:53.692 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:53.692 Job: Nvme10n1 ended in about 1.00 seconds with error
00:22:53.692 Verification LBA range: start 0x0 length 0x400
00:22:53.692 Nvme10n1 : 1.00 128.48 8.03 64.24 0.00 271651.27 18459.31 269134.51
00:22:53.692 [2024-10-11T09:56:38.324Z] ===================================================================================================================
00:22:53.692 [2024-10-11T09:56:38.324Z] Total : 1769.81 110.61 651.36 0.00 238708.52 4287.15 270882.13
00:22:53.692 [2024-10-11 11:56:38.042192] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:53.692 [2024-10-11 11:56:38.042220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:53.692 [2024-10-11 11:56:38.042650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.692 [2024-10-11 11:56:38.042674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed9e90 with addr=10.0.0.2, port=4420
00:22:53.692 [2024-10-11 11:56:38.042684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed9e90 is same with the state(6) to be set
00:22:53.692 [2024-10-11 11:56:38.043053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.692 [2024-10-11 11:56:38.043063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee0c30 with addr=10.0.0.2, port=4420
00:22:53.692 [2024-10-11 11:56:38.043071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee0c30 is same with the state(6) to be set
00:22:53.692 [2024-10-11 11:56:38.043287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.693 [2024-10-11 11:56:38.043296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130d470 with addr=10.0.0.2, port=4420
00:22:53.693 [2024-10-11 11:56:38.043304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130d470 is same with the state(6) to be set
00:22:53.693 [2024-10-11 11:56:38.043526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.693 [2024-10-11 11:56:38.043537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdfb610 with addr=10.0.0.2, port=4420
00:22:53.693 [2024-10-11 11:56:38.043544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfb610 is same with the state(6) to be set
00:22:53.693 [2024-10-11 11:56:38.045427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:53.693 [2024-10-11 11:56:38.045441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:53.693 [2024-10-11 11:56:38.045758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:53.693 [2024-10-11 11:56:38.045772]
nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1334a20 with addr=10.0.0.2, port=4420 00:22:53.693 [2024-10-11 11:56:38.045780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1334a20 is same with the state(6) to be set 00:22:53.693 [2024-10-11 11:56:38.045977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.693 [2024-10-11 11:56:38.045989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee06b0 with addr=10.0.0.2, port=4420 00:22:53.693 [2024-10-11 11:56:38.045996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee06b0 is same with the state(6) to be set 00:22:53.693 [2024-10-11 11:56:38.046293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.693 [2024-10-11 11:56:38.046303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13336e0 with addr=10.0.0.2, port=4420 00:22:53.693 [2024-10-11 11:56:38.046310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13336e0 is same with the state(6) to be set 00:22:53.693 [2024-10-11 11:56:38.046321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed9e90 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.046333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee0c30 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.046342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d470 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.046351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfb610 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.046380] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:53.693 [2024-10-11 11:56:38.046401] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:53.693 [2024-10-11 11:56:38.046412] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:53.693 [2024-10-11 11:56:38.046423] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:53.693 [2024-10-11 11:56:38.046434] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:53.693 [2024-10-11 11:56:38.046739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:53.693 [2024-10-11 11:56:38.047007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.693 [2024-10-11 11:56:38.047020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xee31b0 with addr=10.0.0.2, port=4420 00:22:53.693 [2024-10-11 11:56:38.047028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee31b0 is same with the state(6) to be set 00:22:53.693 [2024-10-11 11:56:38.047369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.693 [2024-10-11 11:56:38.047379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130d150 with addr=10.0.0.2, port=4420 00:22:53.693 [2024-10-11 11:56:38.047387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130d150 is same with the state(6) to be set 00:22:53.693 [2024-10-11 11:56:38.047396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1334a20 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.047405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee06b0 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.047414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13336e0 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.047422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.047429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.047437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:53.693 [2024-10-11 11:56:38.047449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.047455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.047462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:53.693 [2024-10-11 11:56:38.047473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.047480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.047487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:53.693 [2024-10-11 11:56:38.047497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.047504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.047511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:53.693 [2024-10-11 11:56:38.047590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:53.693 [2024-10-11 11:56:38.047599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 [2024-10-11 11:56:38.047605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 [2024-10-11 11:56:38.047611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 [2024-10-11 11:56:38.047813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.693 [2024-10-11 11:56:38.047824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1304130 with addr=10.0.0.2, port=4420 00:22:53.693 [2024-10-11 11:56:38.047831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304130 is same with the state(6) to be set 00:22:53.693 [2024-10-11 11:56:38.047841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee31b0 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.047850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d150 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.047858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.047864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.047871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:53.693 [2024-10-11 11:56:38.047881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.047887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.047894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:53.693 [2024-10-11 11:56:38.047903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.047910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.047916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:53.693 [2024-10-11 11:56:38.047945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 [2024-10-11 11:56:38.047952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 [2024-10-11 11:56:38.047959] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 [2024-10-11 11:56:38.047966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1304130 (9): Bad file descriptor 00:22:53.693 [2024-10-11 11:56:38.047974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.047980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.047987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:53.693 [2024-10-11 11:56:38.047996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.048002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.048009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:53.693 [2024-10-11 11:56:38.048038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 [2024-10-11 11:56:38.048045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 [2024-10-11 11:56:38.048051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:53.693 [2024-10-11 11:56:38.048057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:53.693 [2024-10-11 11:56:38.048064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:53.693 [2024-10-11 11:56:38.048090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.693 11:56:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1087298 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1087298 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1087298 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 
00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.636 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.636 rmmod nvme_tcp 00:22:54.897 rmmod nvme_fabrics 00:22:54.897 rmmod nvme_keyring 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1087077 ']' 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1087077 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1087077 ']' 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1087077 00:22:54.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1087077) - No such process 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1087077 is not found' 00:22:54.897 Process with pid 1087077 is not found 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.897 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.898 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.898 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.898 11:56:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.810 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.810 00:22:56.810 real 0m7.690s 00:22:56.810 user 0m18.605s 00:22:56.810 sys 0m1.256s 00:22:56.810 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:56.810 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:56.810 ************************************ 00:22:56.810 END TEST nvmf_shutdown_tc3 00:22:56.810 ************************************ 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:57.071 ************************************ 00:22:57.071 START TEST nvmf_shutdown_tc4 00:22:57.071 ************************************ 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.071 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.072 11:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:57.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:57.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.072 11:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:57.072 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:57.072 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.072 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:57.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:22:57.333 00:22:57.333 --- 10.0.0.2 ping statistics --- 00:22:57.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.333 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:22:57.333 00:22:57.333 --- 10.0.0.1 ping statistics --- 00:22:57.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.333 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1088688 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1088688 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1088688 ']' 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.333 11:56:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:57.333 [2024-10-11 11:56:41.909098] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:22:57.333 [2024-10-11 11:56:41.909158] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.593 [2024-10-11 11:56:41.998938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.593 [2024-10-11 11:56:42.029655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.593 [2024-10-11 11:56:42.029686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.593 [2024-10-11 11:56:42.029692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.593 [2024-10-11 11:56:42.029697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.593 [2024-10-11 11:56:42.029701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.593 [2024-10-11 11:56:42.030908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.593 [2024-10-11 11:56:42.031110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.594 [2024-10-11 11:56:42.031259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.594 [2024-10-11 11:56:42.031260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.165 [2024-10-11 11:56:42.747050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:58.165 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[... the shutdown.sh@28/@29 for/cat pair repeats identically for each of the ten subsystems, 00:22:58.165 through 00:22:58.426 ...]
00:22:58.426 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:22:58.426 11:56:42
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.426 11:56:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.426 Malloc1 00:22:58.426 [2024-10-11 11:56:42.861315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.426 Malloc2 00:22:58.426 Malloc3 00:22:58.426 Malloc4 00:22:58.426 Malloc5 00:22:58.426 Malloc6 00:22:58.686 Malloc7 00:22:58.686 Malloc8 00:22:58.686 Malloc9 00:22:58.686 Malloc10 00:22:58.686 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.686 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:58.686 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.686 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:58.686 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1089073 00:22:58.686 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:58.686 11:56:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:58.946 [2024-10-11 11:56:43.328121] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
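Note on the trace above: nvmf_tcp_init moved one port (cvl_0_0) into the cvl_0_0_ns_spdk namespace, addressed both ends, opened TCP/4420 in iptables, started nvmf_tgt pinned to cores 1-4 (-m 0x1E, matching the four reactor notices), wrote ten subsystem definitions into rpcs.txt (hence Malloc1..Malloc10), and launched spdk_nvme_perf (perfpid=1089073) at 10.0.0.2:4420. A minimal sketch of the same wiring, assuming the interface names and build paths this job uses:

    # recreate the namespace plumbing traced in nvmf/common.sh (a sketch)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # start the target inside the namespace, then drive I/O at it from the host side
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4

Here -q, -o, -w and -t are queue depth, I/O size in bytes, workload and run time; -O and -P are passed through from shutdown.sh as-is (see spdk_nvme_perf --help for their exact meaning).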
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1088688
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1088688 ']'
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1088688
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1088688
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1088688'
killing process with pid 1088688
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1088688
00:23:04.238 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1088688
00:23:04.238 [2024-10-11 11:56:48.333061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49780 is same with the state(6) to be set
00:23:04.238 [... the same recv-state message repeats for tqpair=0xb49780, 0xb49c50, 0xb4a970, 0xb4bcb0 and 0xb4c180 ...]
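The kill sequence just traced is autotest_common.sh's killprocess, reduced here to the steps the trace actually exercised; a sketch of the equivalent helper, eliding the sudo-wrapped branch the '[' reactor_1 = sudo ']' test skipped:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # is it still running?
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        # the real helper special-cases process_name = sudo; here it was reactor_1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap and propagate the exit status
    }

Killing pid 1088688 (the target) while spdk_nvme_perf still has up to 128 queued commands per qpair is what produces the abort storm below.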
00:23:04.238 [... further recv-state messages for tqpair=0xc73050, 0xb4b7e0 and 0xc74a90 ...]
00:23:04.238 Write completed with error (sct=0, sc=8)
00:23:04.238 Write completed with error (sct=0, sc=8)
00:23:04.238 Write completed with error (sct=0, sc=8)
00:23:04.238 starting I/O failed: -6
00:23:04.238 [... 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' repeat for the rest of this block, interleaved with recv-state messages for tqpair=0xc74f60, 0xc75430, 0xc745c0, 0xc738a0 and 0xc74240; the distinct records are kept below ...]
00:23:04.238 [2024-10-11 11:56:48.340648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.239 [2024-10-11 11:56:48.341512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.239 [2024-10-11 11:56:48.342445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.240 [2024-10-11 11:56:48.344075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.240 NVMe io qpair process completion error
00:23:04.240 [... aborted-write records continue for the next controller ...]
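For reading this storm: sct=0, sc=8 is NVMe status code type 0 (generic command status) with status code 0x08, which the NVMe base spec lists as Command Aborted due to SQ Deletion, and -6 is -ENXIO, reported by the initiator as "No such device or address" once a completion queue disappears underneath it. Both are what killing the target mid-workload should produce. A quick way to tally a saved run (illustrative only; assumes the console output was captured to build.log):

    grep -c 'Write completed with error (sct=0, sc=8)' build.log   # aborted writes
    grep -c 'CQ transport error -6' build.log                      # dead qpairs seen by the initiator
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort -u               # distinct target-side qpairs torn down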
00:23:04.240 [2024-10-11 11:56:48.345142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.240 [... aborted-write records between each of the following transport errors are elided ...]
00:23:04.240 [2024-10-11 11:56:48.345964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.241 [2024-10-11 11:56:48.346893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.241 [2024-10-11 11:56:48.348363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.241 NVMe io qpair process completion error
00:23:04.241 [2024-10-11 11:56:48.349373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.242 [2024-10-11 11:56:48.350259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.242 [2024-10-11 11:56:48.351180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.242 [... a run of bare 'starting I/O failed: -6' records ...]
00:23:04.242 NVMe io qpair process completion error
00:23:04.242 Write completed with error (sct=0, sc=8)
00:23:04.242 starting I/O failed: -6
00:23:04.242 [... the aborted-write pattern continues ...]
00:23:04.242 Write completed with error (sct=0, sc=8) 00:23:04.242 Write completed with error (sct=0, sc=8) 00:23:04.242 Write completed with error (sct=0, sc=8) 00:23:04.242 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 [2024-10-11 11:56:48.353107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.243 starting I/O failed: -6 00:23:04.243 starting I/O failed: -6 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, 
sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 [2024-10-11 11:56:48.354079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 
00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 [2024-10-11 11:56:48.354985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 
starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.243 starting I/O failed: -6 00:23:04.243 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 [2024-10-11 11:56:48.356422] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.244 NVMe io qpair process completion error 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 [2024-10-11 11:56:48.357677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, 
sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 [2024-10-11 11:56:48.358517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 
00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.244 Write completed with error (sct=0, sc=8) 00:23:04.244 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 [2024-10-11 11:56:48.359487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:04.245 starting I/O failed: -6 00:23:04.245 starting I/O failed: -6 00:23:04.245 starting I/O failed: -6 00:23:04.245 starting I/O failed: -6 00:23:04.245 starting I/O failed: -6 00:23:04.245 starting I/O failed: -6 00:23:04.245 starting I/O failed: -6 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with 
error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error 
(sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 [2024-10-11 11:56:48.363634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.245 NVMe io qpair process completion error 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 [2024-10-11 11:56:48.364774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or 
address) on qpair id 1 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 starting I/O failed: -6 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.245 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 [2024-10-11 11:56:48.365575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error 
(sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: 
-6 00:23:04.246 [2024-10-11 11:56:48.366501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 
Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.246 starting I/O failed: -6 00:23:04.246 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 [2024-10-11 11:56:48.368110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:23:04.247 NVMe io qpair process completion error 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error 
(sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 [2024-10-11 11:56:48.369345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:23:04.247 starting I/O failed: -6 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 00:23:04.247 Write completed with error (sct=0, sc=8) 00:23:04.247 starting I/O failed: -6 
00:23:04.247 Write completed with error (sct=0, sc=8)
00:23:04.247 starting I/O failed: -6
00:23:04.247 [hundreds of further identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted; they repeat between each of the qpair errors below]
00:23:04.247 [2024-10-11 11:56:48.370170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.248 [2024-10-11 11:56:48.371169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.248 [2024-10-11 11:56:48.372804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.248 NVMe io qpair process completion error
00:23:04.248 [2024-10-11 11:56:48.374233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.249 [2024-10-11 11:56:48.375075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.249 [2024-10-11 11:56:48.376006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.249 [2024-10-11 11:56:48.378176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.249 NVMe io qpair process completion error
00:23:04.250 [2024-10-11 11:56:48.379289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.250 [2024-10-11 11:56:48.380112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.250 [2024-10-11 11:56:48.381063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:23:04.251 [2024-10-11 11:56:48.382760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:23:04.251 NVMe io qpair process completion error
00:23:04.252 [2024-10-11 11:56:48.384982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:04.252 [2024-10-11 11:56:48.388226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:23:04.252 NVMe io qpair process completion error
00:23:04.252 [final drain of "Write completed with error (sct=0, sc=8)" completions omitted]
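The burst of failed writes above is the expected behavior for this shutdown test case: spdk_nvme_perf is still submitting I/O while the target is torn down, so every in-flight command completes with a transport error (-6 is -ENXIO, "No such device or address", reported per qpair by spdk_nvme_qpair_process_completions). A minimal bash sketch of that pattern, with assumed names ($TRID, $nvmfpid and $perf_pid are illustrative only, not taken from this log; the real logic lives in target/shutdown.sh and is not reproduced here):

    # Sketch only: drive writes, kill the target mid-run, require failure.
    TRID='trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'        # assumed transport ID
    ./build/bin/spdk_nvme_perf -q 128 -o 4096 -w write -t 10 -r "$TRID" &
    perf_pid=$!
    sleep 5                  # let I/O reach a steady state
    kill -9 "$nvmfpid"       # $nvmfpid: PID of the nvmf target process (assumed)
    sleep 1
    NOT wait "$perf_pid"     # passes only if perf exited non-zero (NOT helper traced below)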
00:23:04.252 Initializing NVMe Controllers
00:23:04.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:04.252 Controller IO queue size 128, less than required.
00:23:04.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:04.252 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:04.253 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:04.253 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:04.253 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:04.253 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:04.253 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:04.253 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:04.253 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:04.253 Controller IO queue size 128, less than required.
00:23:04.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:04.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:04.253 Initialization complete. Launching workers.
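Each "Controller IO queue size 128, less than required" warning means the perf run requested a deeper queue than the controller advertises, so the excess requests sit in the NVMe driver's software queue rather than on the wire. Capping the queue depth at the advertised size avoids that queuing; a hypothetical re-run (not part of this job), using the standard spdk_nvme_perf flags:

    # -q queue depth, -o IO size (bytes), -w workload, -t seconds, -r transport ID
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'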
00:23:04.253 ========================================================
00:23:04.253 Latency(us)
00:23:04.253 Device Information : IOPS MiB/s Average min max
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1855.81 79.74 68988.75 717.26 123686.66
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1857.56 79.82 69151.75 854.98 124215.43
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1858.64 79.86 68911.93 683.49 123910.38
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1884.77 80.99 67982.60 613.78 119483.59
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1838.83 79.01 69735.84 801.96 126297.84
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1886.73 81.07 67988.20 702.45 128046.61
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1861.04 79.97 68953.14 645.99 122063.15
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1888.47 81.15 67980.67 698.57 121899.99
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1857.34 79.81 69132.18 520.10 123214.28
00:23:04.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1881.50 80.85 67570.51 691.65 122067.94
00:23:04.253 ========================================================
00:23:04.253 Total : 18670.69 802.26 68634.05 520.10 128046.61
00:23:04.253
00:23:04.253 [2024-10-11 11:56:48.394117] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558e20 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15591e0 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15579c0 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558350 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1559000 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155d670 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155d9a0 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1558020 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1557cf0 is same with the state(6) to be set
00:23:04.253 [2024-10-11 11:56:48.394387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155d340 is same with the state(6) to be set
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:04.253 11:56:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:05.197 11:56:49
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1089073 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1089073 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1089073 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.197 rmmod nvme_tcp 00:23:05.197 rmmod nvme_fabrics 00:23:05.197 rmmod nvme_keyring 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1088688 ']' 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1088688 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1088688 ']' 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1088688 00:23:05.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1088688) - No such process 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1088688 is not found' 00:23:05.197 Process with pid 1088688 is not found 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.197 11:56:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.114 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:07.375 00:23:07.375 real 0m10.259s 00:23:07.375 user 0m28.103s 00:23:07.375 sys 0m3.837s 00:23:07.375 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.375 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.375 ************************************ 00:23:07.375 END TEST nvmf_shutdown_tc4 00:23:07.375 ************************************ 00:23:07.375 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:07.375 00:23:07.375 real 0m42.361s 00:23:07.375 user 1m42.582s 00:23:07.375 sys 0m13.175s 00:23:07.375 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.375 11:56:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.375 ************************************ 00:23:07.375 END TEST nvmf_shutdown 00:23:07.375 ************************************ 00:23:07.375 11:56:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:23:07.375 00:23:07.375 real 12m32.103s 00:23:07.375 user 26m26.202s 00:23:07.375 sys 3m40.023s 00:23:07.375 11:56:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.375 11:56:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:07.375 ************************************ 00:23:07.375 END TEST nvmf_target_extra 00:23:07.375 ************************************ 00:23:07.375 11:56:51 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:07.375 11:56:51 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:07.375 11:56:51 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.375 11:56:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:07.375 ************************************ 00:23:07.375 START TEST nvmf_host 00:23:07.375 ************************************ 00:23:07.375 11:56:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:07.638 * Looking for test storage... 00:23:07.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:07.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.638 --rc genhtml_branch_coverage=1 00:23:07.638 --rc genhtml_function_coverage=1 00:23:07.638 --rc genhtml_legend=1 00:23:07.638 --rc geninfo_all_blocks=1 00:23:07.638 --rc geninfo_unexecuted_blocks=1 00:23:07.638 00:23:07.638 ' 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:07.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.638 --rc genhtml_branch_coverage=1 00:23:07.638 --rc genhtml_function_coverage=1 00:23:07.638 --rc genhtml_legend=1 00:23:07.638 --rc geninfo_all_blocks=1 00:23:07.638 --rc geninfo_unexecuted_blocks=1 00:23:07.638 00:23:07.638 ' 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:07.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.638 --rc genhtml_branch_coverage=1 00:23:07.638 --rc genhtml_function_coverage=1 00:23:07.638 --rc genhtml_legend=1 00:23:07.638 --rc geninfo_all_blocks=1 00:23:07.638 --rc geninfo_unexecuted_blocks=1 00:23:07.638 00:23:07.638 ' 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:07.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.638 --rc genhtml_branch_coverage=1 00:23:07.638 --rc genhtml_function_coverage=1 00:23:07.638 --rc genhtml_legend=1 00:23:07.638 --rc geninfo_all_blocks=1 00:23:07.638 --rc geninfo_unexecuted_blocks=1 00:23:07.638 00:23:07.638 ' 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
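The lcov probe traced above reduces to a dotted-version comparison: scripts/common.sh splits both version strings on "." and "-" and compares them field by field, so "lt 1.15 2" reports that lcov 1.15 predates 2 and the branch/function coverage flags get switched on. A condensed sketch of that idiom (not the literal common.sh code):

  ver_lt() {                        # is $1 older than $2?
      local IFS=.-
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                      # equal is not less-than
  }
  ver_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'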
00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.638 11:56:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.659 11:56:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.659 11:56:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.659 11:56:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.659 11:56:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.660 ************************************ 00:23:07.660 START TEST nvmf_multicontroller 00:23:07.660 ************************************ 00:23:07.660 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:07.922 * Looking for test storage... 
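The "[: : integer expression expected" complaint above is a shell quirk rather than a test failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while the variable it tests is still empty, and test(1) refuses to compare an empty string numerically. The usual fix is a default expansion; a sketch (SOME_FLAG is a hypothetical stand-in for whichever variable is unset here):

  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi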
00:23:07.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.922 --rc genhtml_branch_coverage=1 00:23:07.922 --rc genhtml_function_coverage=1 00:23:07.922 --rc genhtml_legend=1 00:23:07.922 --rc geninfo_all_blocks=1 00:23:07.922 --rc geninfo_unexecuted_blocks=1 00:23:07.922 00:23:07.922 ' 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.922 --rc genhtml_branch_coverage=1 00:23:07.922 --rc genhtml_function_coverage=1 00:23:07.922 --rc genhtml_legend=1 00:23:07.922 --rc geninfo_all_blocks=1 00:23:07.922 --rc geninfo_unexecuted_blocks=1 00:23:07.922 00:23:07.922 ' 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.922 --rc genhtml_branch_coverage=1 00:23:07.922 --rc genhtml_function_coverage=1 00:23:07.922 --rc genhtml_legend=1 00:23:07.922 --rc geninfo_all_blocks=1 00:23:07.922 --rc geninfo_unexecuted_blocks=1 00:23:07.922 00:23:07.922 ' 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:07.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.922 --rc genhtml_branch_coverage=1 00:23:07.922 --rc genhtml_function_coverage=1 00:23:07.922 --rc genhtml_legend=1 00:23:07.922 --rc geninfo_all_blocks=1 00:23:07.922 --rc geninfo_unexecuted_blocks=1 00:23:07.922 00:23:07.922 ' 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:07.922 11:56:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.922 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:07.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:07.923 11:56:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:07.923 11:56:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:16.071 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.071 
11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:16.072 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:16.072 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.072 11:56:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:16.072 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:16.072 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
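nvmf_tcp_init, traced next, turns the two ports of one physical NIC into a point-to-point TCP path on a single host: the target port cvl_0_0 is moved into a private network namespace and addressed 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, and an iptables rule opens the NVMe/TCP port. Condensed from the commands traced below (root required; interface names come from this log, and the SPDK_NVMF comment tag on the firewall rule is omitted here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator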
00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:16.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:23:16.072 00:23:16.072 --- 10.0.0.2 ping statistics --- 00:23:16.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.072 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:23:16.072 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:23:16.072 00:23:16.072 --- 10.0.0.1 ping statistics --- 00:23:16.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.073 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1094485 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1094485 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1094485 ']' 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.073 11:56:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.073 [2024-10-11 11:57:00.022569] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:16.073 [2024-10-11 11:57:00.022643] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.073 [2024-10-11 11:57:00.115446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:16.073 [2024-10-11 11:57:00.170361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.073 [2024-10-11 11:57:00.170420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.073 [2024-10-11 11:57:00.170429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.073 [2024-10-11 11:57:00.170437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.073 [2024-10-11 11:57:00.170443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.073 [2024-10-11 11:57:00.172448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.073 [2024-10-11 11:57:00.172609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.073 [2024-10-11 11:57:00.172610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.335 [2024-10-11 11:57:00.891534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.335 Malloc0 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.335 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.597 [2024-10-11 11:57:00.975816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.597 [2024-10-11 11:57:00.987724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.597 11:57:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.597 Malloc1 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1094894 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1094894 /var/tmp/bdevperf.sock 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1094894 ']' 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
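The trace up to this point is the complete target-side setup for the multicontroller test: one TCP transport, two malloc-backed subsystems (cnode1/Malloc0 and cnode2/Malloc1), each listening on both port 4420 and port 4421 of 10.0.0.2, and a bdevperf instance started in wait mode. A minimal sketch of the same sequence, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the default application socket as in the standard autotest harness:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Malloc1/cnode2 repeat the same subsystem steps on the same two ports
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # -z makes bdevperf wait for configuration RPCs on the -r socket before running I/O

Each subsystem being reachable over two ports of one address is the precondition for the multipath attach cases that follow.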
00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.597 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.542 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.542 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:17.542 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:17.542 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.542 11:57:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.542 NVMe0n1 00:23:17.542 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.542 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.542 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.542 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:17.542 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.542 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.542 1 00:23:17.542 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:17.542 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.805 request: 00:23:17.805 { 00:23:17.805 "name": "NVMe0", 00:23:17.805 "trtype": "tcp", 00:23:17.805 "traddr": "10.0.0.2", 00:23:17.805 "adrfam": "ipv4", 00:23:17.805 "trsvcid": "4420", 00:23:17.805 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:17.805 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:17.805 "hostaddr": "10.0.0.1", 00:23:17.805 "prchk_reftag": false, 00:23:17.805 "prchk_guard": false, 00:23:17.805 "hdgst": false, 00:23:17.805 "ddgst": false, 00:23:17.805 "allow_unrecognized_csi": false, 00:23:17.805 "method": "bdev_nvme_attach_controller", 00:23:17.805 "req_id": 1 00:23:17.805 } 00:23:17.805 Got JSON-RPC error response 00:23:17.805 response: 00:23:17.805 { 00:23:17.805 "code": -114, 00:23:17.805 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:17.805 } 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.805 request: 00:23:17.805 { 00:23:17.805 "name": "NVMe0", 00:23:17.805 "trtype": "tcp", 00:23:17.805 "traddr": "10.0.0.2", 00:23:17.805 "adrfam": "ipv4", 00:23:17.805 "trsvcid": "4420", 00:23:17.805 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:17.805 "hostaddr": "10.0.0.1", 00:23:17.805 "prchk_reftag": false, 00:23:17.805 "prchk_guard": false, 00:23:17.805 "hdgst": false, 00:23:17.805 "ddgst": false, 00:23:17.805 "allow_unrecognized_csi": false, 00:23:17.805 "method": "bdev_nvme_attach_controller", 00:23:17.805 "req_id": 1 00:23:17.805 } 00:23:17.805 Got JSON-RPC error response 00:23:17.805 response: 00:23:17.805 { 00:23:17.805 "code": -114, 00:23:17.805 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:17.805 } 00:23:17.805 11:57:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.805 request: 00:23:17.805 { 00:23:17.805 "name": "NVMe0", 00:23:17.805 "trtype": "tcp", 00:23:17.805 "traddr": "10.0.0.2", 00:23:17.805 "adrfam": "ipv4", 00:23:17.805 "trsvcid": "4420", 00:23:17.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.805 "hostaddr": "10.0.0.1", 00:23:17.805 "prchk_reftag": false, 00:23:17.805 "prchk_guard": false, 00:23:17.805 "hdgst": false, 00:23:17.805 "ddgst": false, 00:23:17.805 "multipath": "disable", 00:23:17.805 "allow_unrecognized_csi": false, 00:23:17.805 "method": "bdev_nvme_attach_controller", 00:23:17.805 "req_id": 1 00:23:17.805 } 00:23:17.805 Got JSON-RPC error response 00:23:17.805 response: 00:23:17.805 { 00:23:17.805 "code": -114, 00:23:17.805 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:17.805 } 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:17.805 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:17.805 11:57:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:17.806 request: 00:23:17.806 { 00:23:17.806 "name": "NVMe0", 00:23:17.806 "trtype": "tcp", 00:23:17.806 "traddr": "10.0.0.2", 00:23:17.806 "adrfam": "ipv4", 00:23:17.806 "trsvcid": "4420", 00:23:17.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.806 "hostaddr": "10.0.0.1", 00:23:17.806 "prchk_reftag": false, 00:23:17.806 "prchk_guard": false, 00:23:17.806 "hdgst": false, 00:23:17.806 "ddgst": false, 00:23:17.806 "multipath": "failover", 00:23:17.806 "allow_unrecognized_csi": false, 00:23:17.806 "method": "bdev_nvme_attach_controller", 00:23:17.806 "req_id": 1 00:23:17.806 } 00:23:17.806 Got JSON-RPC error response 00:23:17.806 response: 00:23:17.806 { 00:23:17.806 "code": -114, 00:23:17.806 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:17.806 } 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.806 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.067 NVMe0n1 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
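All four NOT-wrapped attach attempts above return JSON-RPC error -114 because a controller named NVMe0 already exists: a repeat of the existing 4420 path with a different hostnqn, the same controller name pointed at a different subsystem (cnode2), the same path with multipath explicitly disabled, and a failover attach that only repeats the already-known path are all rejected. The one call that succeeds (multicontroller.sh@79, yielding NVMe0n1 again) adds the second listener port, 4421, as a genuinely new path under the existing controller name. Condensed against the bdevperf RPC socket, mirroring the calls traced above:

    RPC="rpc.py -s /var/tmp/bdevperf.sock"
    # accepted: a new path (port 4421) to the same subsystem under the same name
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # rejected with -114: same name, different subsystem NQN
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
    # rejected with -114: existing path re-attached with multipath disabled
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable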
00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.067 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:18.067 11:57:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.452 { 00:23:19.452 "results": [ 00:23:19.452 { 00:23:19.452 "job": "NVMe0n1", 00:23:19.452 "core_mask": "0x1", 00:23:19.452 "workload": "write", 00:23:19.452 "status": "finished", 00:23:19.452 "queue_depth": 128, 00:23:19.452 "io_size": 4096, 00:23:19.452 "runtime": 1.006561, 00:23:19.452 "iops": 28752.35579363794, 00:23:19.452 "mibps": 112.3138898188982, 00:23:19.452 "io_failed": 0, 00:23:19.452 "io_timeout": 0, 00:23:19.452 "avg_latency_us": 4441.992242608525, 00:23:19.452 "min_latency_us": 2116.266666666667, 00:23:19.452 "max_latency_us": 12014.933333333332 00:23:19.452 } 00:23:19.452 ], 00:23:19.452 "core_count": 1 00:23:19.452 } 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1094894 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1094894 ']' 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1094894 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1094894 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1094894' 00:23:19.452 killing process with pid 1094894 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1094894 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1094894 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.452 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:19.453 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:19.453 [2024-10-11 11:57:01.116432] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:19.453 [2024-10-11 11:57:01.116518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094894 ] 00:23:19.453 [2024-10-11 11:57:01.198225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.453 [2024-10-11 11:57:01.251907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.453 [2024-10-11 11:57:02.532034] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 9924ae89-da16-490c-bf46-d151ffc9960f already exists 00:23:19.453 [2024-10-11 11:57:02.532065] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:9924ae89-da16-490c-bf46-d151ffc9960f alias for bdev NVMe1n1 00:23:19.453 [2024-10-11 11:57:02.532075] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:19.453 Running I/O for 1 seconds... 00:23:19.453 28729.00 IOPS, 112.22 MiB/s 00:23:19.453 Latency(us) 00:23:19.453 [2024-10-11T09:57:04.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.453 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:19.453 NVMe0n1 : 1.01 28752.36 112.31 0.00 0.00 4441.99 2116.27 12014.93 00:23:19.453 [2024-10-11T09:57:04.085Z] =================================================================================================================== 00:23:19.453 [2024-10-11T09:57:04.085Z] Total : 28752.36 112.31 0.00 0.00 4441.99 2116.27 12014.93 00:23:19.453 Received shutdown signal, test time was about 1.000000 seconds 00:23:19.453 00:23:19.453 Latency(us) 00:23:19.453 [2024-10-11T09:57:04.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.453 [2024-10-11T09:57:04.085Z] =================================================================================================================== 00:23:19.453 [2024-10-11T09:57:04.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.453 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:19.453 rmmod nvme_tcp 00:23:19.453 rmmod nvme_fabrics 00:23:19.453 rmmod nvme_keyring 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
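The try.txt dump above preserves the only ERROR output of the run: when NVMe1 was attached to cnode1 (multicontroller.sh@87), its namespace apparently exposed the same UUID as the bdev already registered through NVMe0, so no NVMe1n1 alias could be added and spdk_bdev_register() failed (per the bdev_name_add/bdev_register ERROR lines). The test tolerates this; I/O ran on NVMe0n1 (28752 IOPS over the 1 s run) and NVMe1 was detached afterwards. The capture-and-clean pattern is the trap armed at multicontroller.sh@46 and disarmed at @121, both visible in the trace:

    trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
    # ... test body ...
    trap - SIGINT SIGTERM EXIT   # success path: disarm, then dump try.txt via pap and clean up manually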
00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1094485 ']' 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1094485 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1094485 ']' 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1094485 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:19.453 11:57:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1094485 00:23:19.453 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:19.453 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:19.453 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1094485' 00:23:19.453 killing process with pid 1094485 00:23:19.453 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1094485 00:23:19.453 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1094485 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.714 11:57:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.628 11:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.889 00:23:21.889 real 0m14.061s 00:23:21.889 user 0m17.310s 00:23:21.889 sys 0m6.557s 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.889 ************************************ 00:23:21.889 END TEST nvmf_multicontroller 00:23:21.889 ************************************ 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.889 ************************************ 00:23:21.889 START TEST nvmf_aer 00:23:21.889 ************************************ 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:21.889 * Looking for test storage... 00:23:21.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:21.889 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.150 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:22.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.150 --rc genhtml_branch_coverage=1 00:23:22.150 --rc genhtml_function_coverage=1 00:23:22.150 --rc genhtml_legend=1 00:23:22.150 --rc geninfo_all_blocks=1 00:23:22.150 --rc geninfo_unexecuted_blocks=1 00:23:22.150 00:23:22.151 ' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:22.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.151 --rc genhtml_branch_coverage=1 00:23:22.151 --rc genhtml_function_coverage=1 00:23:22.151 --rc genhtml_legend=1 00:23:22.151 --rc geninfo_all_blocks=1 00:23:22.151 --rc geninfo_unexecuted_blocks=1 00:23:22.151 00:23:22.151 ' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:22.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.151 --rc genhtml_branch_coverage=1 00:23:22.151 --rc genhtml_function_coverage=1 00:23:22.151 --rc genhtml_legend=1 00:23:22.151 --rc geninfo_all_blocks=1 00:23:22.151 --rc geninfo_unexecuted_blocks=1 00:23:22.151 00:23:22.151 ' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:22.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.151 --rc genhtml_branch_coverage=1 00:23:22.151 --rc genhtml_function_coverage=1 00:23:22.151 --rc genhtml_legend=1 00:23:22.151 --rc geninfo_all_blocks=1 00:23:22.151 --rc geninfo_unexecuted_blocks=1 00:23:22.151 00:23:22.151 ' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.151 11:57:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:30.297 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:30.297 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.297 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:30.298 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:30.298 11:57:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:30.298 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:30.298 11:57:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:30.298 
11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:30.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:23:30.298 00:23:30.298 --- 10.0.0.2 ping statistics --- 00:23:30.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.298 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:23:30.298 00:23:30.298 --- 10.0.0.1 ping statistics --- 00:23:30.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.298 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1100095 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1100095 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1100095 ']' 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.298 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.298 [2024-10-11 11:57:14.172316] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
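For the aer test, nvmftestinit has rebuilt the standard two-port loopback topology traced above: the first E810 net device (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are verified with ping. Condensed from the trace (interface names are the autotest's own):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The nvmf_tgt instance starting here therefore runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, per the nvmf/common.sh@506 trace above).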
00:23:30.298 [2024-10-11 11:57:14.172385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.298 [2024-10-11 11:57:14.261419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.298 [2024-10-11 11:57:14.314709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.298 [2024-10-11 11:57:14.314762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.298 [2024-10-11 11:57:14.314771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.298 [2024-10-11 11:57:14.314778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.298 [2024-10-11 11:57:14.314784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.298 [2024-10-11 11:57:14.316761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.298 [2024-10-11 11:57:14.316922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.298 [2024-10-11 11:57:14.317082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.298 [2024-10-11 11:57:14.317081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.560 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.560 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:30.560 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:30.560 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:30.560 11:57:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.560 [2024-10-11 11:57:15.048465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.560 Malloc0 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.560 [2024-10-11 11:57:15.122679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.560 [ 00:23:30.560 { 00:23:30.560 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:30.560 "subtype": "Discovery", 00:23:30.560 "listen_addresses": [], 00:23:30.560 "allow_any_host": true, 00:23:30.560 "hosts": [] 00:23:30.560 }, 00:23:30.560 { 00:23:30.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.560 "subtype": "NVMe", 00:23:30.560 "listen_addresses": [ 00:23:30.560 { 00:23:30.560 "trtype": "TCP", 00:23:30.560 "adrfam": "IPv4", 00:23:30.560 "traddr": "10.0.0.2", 00:23:30.560 "trsvcid": "4420" 00:23:30.560 } 00:23:30.560 ], 00:23:30.560 "allow_any_host": true, 00:23:30.560 "hosts": [], 00:23:30.560 "serial_number": "SPDK00000000000001", 00:23:30.560 "model_number": "SPDK bdev Controller", 00:23:30.560 "max_namespaces": 2, 00:23:30.560 "min_cntlid": 1, 00:23:30.560 "max_cntlid": 65519, 00:23:30.560 "namespaces": [ 00:23:30.560 { 00:23:30.560 "nsid": 1, 00:23:30.560 "bdev_name": "Malloc0", 00:23:30.560 "name": "Malloc0", 00:23:30.560 "nguid": "A31A282C0FA540578E5F2CD726AD8E51", 00:23:30.560 "uuid": "a31a282c-0fa5-4057-8e5f-2cd726ad8e51" 00:23:30.560 } 00:23:30.560 ] 00:23:30.560 } 00:23:30.560 ] 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1100338 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:30.560 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:30.821 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:30.821 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:30.821 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:30.821 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:30.821 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:30.821 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:23:30.821 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:23:30.821 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.082 Malloc1 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.082 Asynchronous Event Request test 00:23:31.082 Attaching to 10.0.0.2 00:23:31.082 Attached to 10.0.0.2 00:23:31.082 Registering asynchronous event callbacks... 00:23:31.082 Starting namespace attribute notice tests for all controllers... 00:23:31.082 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:31.082 aer_cb - Changed Namespace 00:23:31.082 Cleaning up... 
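The polling visible above is autotest_common.sh's waitforfile helper: the aer test binary touches /tmp/aer_touch_file once its AER callbacks are registered, and the harness polls for that file at 100 ms intervals. A minimal reconstruction from the echoed trace follows; the exact helper body and the failure path beyond what the trace exercises are assumptions.

    # Reconstructed from the xtrace: poll every 0.1 s, up to 200 tries (~20 s)
    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            [ "$i" -lt 200 ] || return 1   # assumed timeout path, not exercised in this run
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }
    waitforfile /tmp/aer_touch_file

In this run the file appeared after four polls, at which point the Malloc1 hot-add (nsid 2) had triggered the Changed Namespace AER reported above; the subsystem dump that follows shows both namespaces.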
00:23:31.082 [ 00:23:31.082 { 00:23:31.082 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:31.082 "subtype": "Discovery", 00:23:31.082 "listen_addresses": [], 00:23:31.082 "allow_any_host": true, 00:23:31.082 "hosts": [] 00:23:31.082 }, 00:23:31.082 { 00:23:31.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.082 "subtype": "NVMe", 00:23:31.082 "listen_addresses": [ 00:23:31.082 { 00:23:31.082 "trtype": "TCP", 00:23:31.082 "adrfam": "IPv4", 00:23:31.082 "traddr": "10.0.0.2", 00:23:31.082 "trsvcid": "4420" 00:23:31.082 } 00:23:31.082 ], 00:23:31.082 "allow_any_host": true, 00:23:31.082 "hosts": [], 00:23:31.082 "serial_number": "SPDK00000000000001", 00:23:31.082 "model_number": "SPDK bdev Controller", 00:23:31.082 "max_namespaces": 2, 00:23:31.082 "min_cntlid": 1, 00:23:31.082 "max_cntlid": 65519, 00:23:31.082 "namespaces": [ 00:23:31.082 { 00:23:31.082 "nsid": 1, 00:23:31.082 "bdev_name": "Malloc0", 00:23:31.082 "name": "Malloc0", 00:23:31.082 "nguid": "A31A282C0FA540578E5F2CD726AD8E51", 00:23:31.082 "uuid": "a31a282c-0fa5-4057-8e5f-2cd726ad8e51" 00:23:31.082 }, 00:23:31.082 { 00:23:31.082 "nsid": 2, 00:23:31.082 "bdev_name": "Malloc1", 00:23:31.082 "name": "Malloc1", 00:23:31.082 "nguid": "A56C486844E64EE4A366C701A7E758E3", 00:23:31.082 "uuid": "a56c4868-44e6-4ee4-a366-c701a7e758e3" 00:23:31.082 } 00:23:31.082 ] 00:23:31.082 } 00:23:31.082 ] 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1100338 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.082 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.343 rmmod 
nvme_tcp 00:23:31.343 rmmod nvme_fabrics 00:23:31.343 rmmod nvme_keyring 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1100095 ']' 00:23:31.343 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1100095 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1100095 ']' 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1100095 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1100095 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1100095' 00:23:31.344 killing process with pid 1100095 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1100095 00:23:31.344 11:57:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1100095 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.607 11:57:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.523 11:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.523 00:23:33.523 real 0m11.763s 00:23:33.523 user 0m9.020s 00:23:33.523 sys 0m6.228s 00:23:33.523 11:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:33.523 11:57:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.523 ************************************ 00:23:33.523 END TEST nvmf_aer 00:23:33.523 ************************************ 00:23:33.523 11:57:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:33.523 11:57:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:33.523 11:57:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:33.523 11:57:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.788 ************************************ 00:23:33.788 START TEST nvmf_async_init 00:23:33.788 ************************************ 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:33.788 * Looking for test storage... 00:23:33.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:33.788 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.789 --rc genhtml_branch_coverage=1 00:23:33.789 --rc genhtml_function_coverage=1 00:23:33.789 --rc genhtml_legend=1 00:23:33.789 --rc geninfo_all_blocks=1 00:23:33.789 --rc geninfo_unexecuted_blocks=1 00:23:33.789 00:23:33.789 ' 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.789 --rc genhtml_branch_coverage=1 00:23:33.789 --rc genhtml_function_coverage=1 00:23:33.789 --rc genhtml_legend=1 00:23:33.789 --rc geninfo_all_blocks=1 00:23:33.789 --rc geninfo_unexecuted_blocks=1 00:23:33.789 00:23:33.789 ' 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.789 --rc genhtml_branch_coverage=1 00:23:33.789 --rc genhtml_function_coverage=1 00:23:33.789 --rc genhtml_legend=1 00:23:33.789 --rc geninfo_all_blocks=1 00:23:33.789 --rc geninfo_unexecuted_blocks=1 00:23:33.789 00:23:33.789 ' 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:33.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.789 --rc genhtml_branch_coverage=1 00:23:33.789 --rc genhtml_function_coverage=1 00:23:33.789 --rc genhtml_legend=1 00:23:33.789 --rc geninfo_all_blocks=1 00:23:33.789 --rc geninfo_unexecuted_blocks=1 00:23:33.789 00:23:33.789 ' 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.789 11:57:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.789 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.050 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.050 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.050 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.050 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.050 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:34.051 11:57:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=eff9d4c1f3a841e8b9a97c6e4800c131 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.051 11:57:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.198 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.198 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.198 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.198 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.198 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.198 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.198 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:42.199 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:42.199 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:42.199 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:42.199 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.199 11:57:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:23:42.199 00:23:42.199 --- 10.0.0.2 ping statistics --- 00:23:42.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.199 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:23:42.199 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:23:42.199 00:23:42.199 --- 10.0.0.1 ping statistics --- 00:23:42.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.199 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1104655 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1104655 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1104655 ']' 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.200 11:57:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.200 [2024-10-11 11:57:26.020308] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:42.200 [2024-10-11 11:57:26.020376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.200 [2024-10-11 11:57:26.110008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.200 [2024-10-11 11:57:26.160767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.200 [2024-10-11 11:57:26.160821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.200 [2024-10-11 11:57:26.160830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.200 [2024-10-11 11:57:26.160837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.200 [2024-10-11 11:57:26.160844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.200 [2024-10-11 11:57:26.161601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.461 [2024-10-11 11:57:26.903435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.461 null0 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g eff9d4c1f3a841e8b9a97c6e4800c131 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.461 [2024-10-11 11:57:26.963780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.461 11:57:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.723 nvme0n1 00:23:42.723 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.723 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:42.723 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.723 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.723 [ 00:23:42.723 { 00:23:42.723 "name": "nvme0n1", 00:23:42.723 "aliases": [ 00:23:42.723 "eff9d4c1-f3a8-41e8-b9a9-7c6e4800c131" 00:23:42.723 ], 00:23:42.723 "product_name": "NVMe disk", 00:23:42.723 "block_size": 512, 00:23:42.723 "num_blocks": 2097152, 00:23:42.723 "uuid": "eff9d4c1-f3a8-41e8-b9a9-7c6e4800c131", 00:23:42.723 "numa_id": 0, 00:23:42.723 "assigned_rate_limits": { 00:23:42.723 "rw_ios_per_sec": 0, 00:23:42.723 "rw_mbytes_per_sec": 0, 00:23:42.723 "r_mbytes_per_sec": 0, 00:23:42.723 "w_mbytes_per_sec": 0 00:23:42.723 }, 00:23:42.723 "claimed": false, 00:23:42.723 "zoned": false, 00:23:42.723 "supported_io_types": { 00:23:42.723 "read": true, 00:23:42.723 "write": true, 00:23:42.723 "unmap": false, 00:23:42.723 "flush": true, 00:23:42.723 "reset": true, 00:23:42.723 "nvme_admin": true, 00:23:42.723 "nvme_io": true, 00:23:42.723 "nvme_io_md": false, 00:23:42.723 "write_zeroes": true, 00:23:42.723 "zcopy": false, 00:23:42.723 "get_zone_info": false, 00:23:42.723 "zone_management": false, 00:23:42.723 "zone_append": false, 00:23:42.723 "compare": true, 00:23:42.723 "compare_and_write": true, 00:23:42.723 "abort": true, 00:23:42.723 "seek_hole": false, 00:23:42.723 "seek_data": false, 00:23:42.723 "copy": true, 00:23:42.723 "nvme_iov_md": false 00:23:42.723 }, 00:23:42.723 
"memory_domains": [ 00:23:42.723 { 00:23:42.723 "dma_device_id": "system", 00:23:42.723 "dma_device_type": 1 00:23:42.723 } 00:23:42.723 ], 00:23:42.723 "driver_specific": { 00:23:42.723 "nvme": [ 00:23:42.723 { 00:23:42.723 "trid": { 00:23:42.723 "trtype": "TCP", 00:23:42.723 "adrfam": "IPv4", 00:23:42.723 "traddr": "10.0.0.2", 00:23:42.723 "trsvcid": "4420", 00:23:42.723 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:42.723 }, 00:23:42.723 "ctrlr_data": { 00:23:42.723 "cntlid": 1, 00:23:42.723 "vendor_id": "0x8086", 00:23:42.723 "model_number": "SPDK bdev Controller", 00:23:42.723 "serial_number": "00000000000000000000", 00:23:42.723 "firmware_revision": "25.01", 00:23:42.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.723 "oacs": { 00:23:42.723 "security": 0, 00:23:42.723 "format": 0, 00:23:42.723 "firmware": 0, 00:23:42.723 "ns_manage": 0 00:23:42.723 }, 00:23:42.723 "multi_ctrlr": true, 00:23:42.723 "ana_reporting": false 00:23:42.723 }, 00:23:42.723 "vs": { 00:23:42.724 "nvme_version": "1.3" 00:23:42.724 }, 00:23:42.724 "ns_data": { 00:23:42.724 "id": 1, 00:23:42.724 "can_share": true 00:23:42.724 } 00:23:42.724 } 00:23:42.724 ], 00:23:42.724 "mp_policy": "active_passive" 00:23:42.724 } 00:23:42.724 } 00:23:42.724 ] 00:23:42.724 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.724 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:42.724 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.724 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.724 [2024-10-11 11:57:27.241536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:42.724 [2024-10-11 11:57:27.241628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2cb40 (9): Bad file descriptor 00:23:42.986 [2024-10-11 11:57:27.373777] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:42.986 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.986 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:42.986 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.986 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.986 [ 00:23:42.986 { 00:23:42.986 "name": "nvme0n1", 00:23:42.986 "aliases": [ 00:23:42.986 "eff9d4c1-f3a8-41e8-b9a9-7c6e4800c131" 00:23:42.986 ], 00:23:42.986 "product_name": "NVMe disk", 00:23:42.986 "block_size": 512, 00:23:42.986 "num_blocks": 2097152, 00:23:42.986 "uuid": "eff9d4c1-f3a8-41e8-b9a9-7c6e4800c131", 00:23:42.986 "numa_id": 0, 00:23:42.986 "assigned_rate_limits": { 00:23:42.986 "rw_ios_per_sec": 0, 00:23:42.986 "rw_mbytes_per_sec": 0, 00:23:42.986 "r_mbytes_per_sec": 0, 00:23:42.986 "w_mbytes_per_sec": 0 00:23:42.986 }, 00:23:42.986 "claimed": false, 00:23:42.986 "zoned": false, 00:23:42.986 "supported_io_types": { 00:23:42.986 "read": true, 00:23:42.986 "write": true, 00:23:42.986 "unmap": false, 00:23:42.986 "flush": true, 00:23:42.986 "reset": true, 00:23:42.986 "nvme_admin": true, 00:23:42.986 "nvme_io": true, 00:23:42.986 "nvme_io_md": false, 00:23:42.986 "write_zeroes": true, 00:23:42.986 "zcopy": false, 00:23:42.986 "get_zone_info": false, 00:23:42.986 "zone_management": false, 00:23:42.986 "zone_append": false, 00:23:42.986 "compare": true, 00:23:42.986 "compare_and_write": true, 00:23:42.986 "abort": true, 00:23:42.986 "seek_hole": false, 00:23:42.986 "seek_data": false, 00:23:42.986 "copy": true, 00:23:42.986 "nvme_iov_md": false 00:23:42.986 }, 00:23:42.987 "memory_domains": [ 00:23:42.987 { 00:23:42.987 "dma_device_id": "system", 00:23:42.987 "dma_device_type": 1 00:23:42.987 } 00:23:42.987 ], 00:23:42.987 "driver_specific": { 00:23:42.987 "nvme": [ 00:23:42.987 { 00:23:42.987 "trid": { 00:23:42.987 "trtype": "TCP", 00:23:42.987 "adrfam": "IPv4", 00:23:42.987 "traddr": "10.0.0.2", 00:23:42.987 "trsvcid": "4420", 00:23:42.987 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:42.987 }, 00:23:42.987 "ctrlr_data": { 00:23:42.987 "cntlid": 2, 00:23:42.987 "vendor_id": "0x8086", 00:23:42.987 "model_number": "SPDK bdev Controller", 00:23:42.987 "serial_number": "00000000000000000000", 00:23:42.987 "firmware_revision": "25.01", 00:23:42.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.987 "oacs": { 00:23:42.987 "security": 0, 00:23:42.987 "format": 0, 00:23:42.987 "firmware": 0, 00:23:42.987 "ns_manage": 0 00:23:42.987 }, 00:23:42.987 "multi_ctrlr": true, 00:23:42.987 "ana_reporting": false 00:23:42.987 }, 00:23:42.987 "vs": { 00:23:42.987 "nvme_version": "1.3" 00:23:42.987 }, 00:23:42.987 "ns_data": { 00:23:42.987 "id": 1, 00:23:42.987 "can_share": true 00:23:42.987 } 00:23:42.987 } 00:23:42.987 ], 00:23:42.987 "mp_policy": "active_passive" 00:23:42.987 } 00:23:42.987 } 00:23:42.987 ] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
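Distilled from the trace, the first pass of async_init drives the RPC sequence below; every command is echoed verbatim above, and the NGUID is this run's uuidgen output with dashes stripped.

    # Target: export a 1024-block x 512 B null bdev under a fixed NGUID
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd bdev_null_create null0 1024 512
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g eff9d4c1f3a841e8b9a97c6e4800c131
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host: attach, inspect, reset, detach; the two bdev_get_bdevs dumps bracket the reset
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc_cmd bdev_nvme_reset_controller nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The two dumps confirm the same namespace UUID survives the reset while cntlid advances from 1 to 2, i.e. the host reconnected as a fresh controller to the same subsystem.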
00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hVDbkI0yxh 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hVDbkI0yxh 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.hVDbkI0yxh 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.987 [2024-10-11 11:57:27.462243] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:42.987 [2024-10-11 11:57:27.462396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.987 [2024-10-11 11:57:27.486323] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.987 nvme0n1 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.987 [ 00:23:42.987 { 00:23:42.987 "name": "nvme0n1", 00:23:42.987 "aliases": [ 00:23:42.987 "eff9d4c1-f3a8-41e8-b9a9-7c6e4800c131" 00:23:42.987 ], 00:23:42.987 "product_name": "NVMe disk", 00:23:42.987 "block_size": 512, 00:23:42.987 "num_blocks": 2097152, 00:23:42.987 "uuid": "eff9d4c1-f3a8-41e8-b9a9-7c6e4800c131", 00:23:42.987 "numa_id": 0, 00:23:42.987 "assigned_rate_limits": { 00:23:42.987 "rw_ios_per_sec": 0, 00:23:42.987 "rw_mbytes_per_sec": 0, 00:23:42.987 "r_mbytes_per_sec": 0, 00:23:42.987 "w_mbytes_per_sec": 0 00:23:42.987 }, 00:23:42.987 "claimed": false, 00:23:42.987 "zoned": false, 00:23:42.987 "supported_io_types": { 00:23:42.987 "read": true, 00:23:42.987 "write": true, 00:23:42.987 "unmap": false, 00:23:42.987 "flush": true, 00:23:42.987 "reset": true, 00:23:42.987 "nvme_admin": true, 00:23:42.987 "nvme_io": true, 00:23:42.987 "nvme_io_md": false, 00:23:42.987 "write_zeroes": true, 00:23:42.987 "zcopy": false, 00:23:42.987 "get_zone_info": false, 00:23:42.987 "zone_management": false, 00:23:42.987 "zone_append": false, 00:23:42.987 "compare": true, 00:23:42.987 "compare_and_write": true, 00:23:42.987 "abort": true, 00:23:42.987 "seek_hole": false, 00:23:42.987 "seek_data": false, 00:23:42.987 "copy": true, 00:23:42.987 "nvme_iov_md": false 00:23:42.987 }, 00:23:42.987 "memory_domains": [ 00:23:42.987 { 00:23:42.987 "dma_device_id": "system", 00:23:42.987 "dma_device_type": 1 00:23:42.987 } 00:23:42.987 ], 00:23:42.987 "driver_specific": { 00:23:42.987 "nvme": [ 00:23:42.987 { 00:23:42.987 "trid": { 00:23:42.987 "trtype": "TCP", 00:23:42.987 "adrfam": "IPv4", 00:23:42.987 "traddr": "10.0.0.2", 00:23:42.987 "trsvcid": "4421", 00:23:42.987 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:42.987 }, 00:23:42.987 "ctrlr_data": { 00:23:42.987 "cntlid": 3, 00:23:42.987 "vendor_id": "0x8086", 00:23:42.987 "model_number": "SPDK bdev Controller", 00:23:42.987 "serial_number": "00000000000000000000", 00:23:42.987 "firmware_revision": "25.01", 00:23:42.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.987 "oacs": { 00:23:42.987 "security": 0, 00:23:42.987 "format": 0, 00:23:42.987 "firmware": 0, 00:23:42.987 "ns_manage": 0 00:23:42.987 }, 00:23:42.987 "multi_ctrlr": true, 00:23:42.987 "ana_reporting": false 00:23:42.987 }, 00:23:42.987 "vs": { 00:23:42.987 "nvme_version": "1.3" 00:23:42.987 }, 00:23:42.987 "ns_data": { 00:23:42.987 "id": 1, 00:23:42.987 "can_share": true 00:23:42.987 } 00:23:42.987 } 00:23:42.987 ], 00:23:42.987 "mp_policy": "active_passive" 00:23:42.987 } 00:23:42.987 } 00:23:42.987 ] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.hVDbkI0yxh 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
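The block above exercises the experimental NVMe/TCP TLS path: a PSK in NVMe interchange format is written to a mode-0600 temp file, registered with the keyring as key0, and then required both on the secure listener (port 4421, --secure-channel) and on the host entry before the initiator attaches with the same key; the second bdev_get_bdevs dump shows the new attach landing on trsvcid "4421" with cntlid 3. A condensed sketch of the same sequence via scripts/rpc.py, assuming a running target and the NQNs from this run (the key below is the test's sample PSK, not a production secret):

  KEY_PATH=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
  # Require an explicit host entry, then open a TLS-only listener on 4421
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  # Initiator side: attach with the matching PSK
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0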
00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:42.987 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:42.988 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.988 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:42.988 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.988 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.249 rmmod nvme_tcp 00:23:43.249 rmmod nvme_fabrics 00:23:43.249 rmmod nvme_keyring 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1104655 ']' 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1104655 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1104655 ']' 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1104655 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1104655 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1104655' 00:23:43.249 killing process with pid 1104655 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1104655 00:23:43.249 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1104655 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
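The nvmftestfini teardown traced above reduces to a few manual steps; a rough equivalent, assuming the PID (1104655, held in a hypothetical NVMF_PID variable here) and cvl_* interface/namespace names recorded in this run, all of which are run-specific:

  modprobe -v -r nvme-tcp                 # unloads nvme_tcp (rmmod output above)
  modprobe -v -r nvme-fabrics             # then nvme_fabrics and nvme_keyring
  kill "$NVMF_PID"                        # stop the nvmf_tgt reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's ACCEPT rules
  ip -4 addr flush cvl_0_1                # clear the initiator-side test address
  ip netns delete cvl_0_0_ns_spdk         # sketch of what _remove_spdk_ns amounts to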
00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.510 11:57:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.426 11:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.426 00:23:45.426 real 0m11.802s 00:23:45.426 user 0m4.319s 00:23:45.426 sys 0m6.079s 00:23:45.426 11:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:45.426 11:57:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:45.426 ************************************ 00:23:45.426 END TEST nvmf_async_init 00:23:45.426 ************************************ 00:23:45.426 11:57:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:45.426 11:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:45.426 11:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:45.426 11:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.689 ************************************ 00:23:45.689 START TEST dma 00:23:45.689 ************************************ 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:45.689 * Looking for test storage... 00:23:45.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.689 --rc genhtml_branch_coverage=1 00:23:45.689 --rc genhtml_function_coverage=1 00:23:45.689 --rc genhtml_legend=1 00:23:45.689 --rc geninfo_all_blocks=1 00:23:45.689 --rc geninfo_unexecuted_blocks=1 00:23:45.689 00:23:45.689 ' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.689 --rc genhtml_branch_coverage=1 00:23:45.689 --rc genhtml_function_coverage=1 00:23:45.689 --rc genhtml_legend=1 00:23:45.689 --rc geninfo_all_blocks=1 00:23:45.689 --rc geninfo_unexecuted_blocks=1 00:23:45.689 00:23:45.689 ' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.689 --rc genhtml_branch_coverage=1 00:23:45.689 --rc genhtml_function_coverage=1 00:23:45.689 --rc genhtml_legend=1 00:23:45.689 --rc geninfo_all_blocks=1 00:23:45.689 --rc geninfo_unexecuted_blocks=1 00:23:45.689 00:23:45.689 ' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:45.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.689 --rc genhtml_branch_coverage=1 00:23:45.689 --rc genhtml_function_coverage=1 00:23:45.689 --rc genhtml_legend=1 00:23:45.689 --rc geninfo_all_blocks=1 00:23:45.689 --rc geninfo_unexecuted_blocks=1 00:23:45.689 00:23:45.689 ' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.689 
11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.689 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.690 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.690 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.690 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.690 11:57:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.690 11:57:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:45.690 11:57:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:45.690 00:23:45.690 real 0m0.238s 00:23:45.690 user 0m0.134s 00:23:45.690 sys 0m0.118s 00:23:45.690 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:45.690 11:57:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:45.690 ************************************ 00:23:45.690 END TEST dma 00:23:45.690 ************************************ 00:23:45.950 11:57:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:45.950 11:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:45.950 11:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:45.950 11:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.950 ************************************ 00:23:45.950 START TEST nvmf_identify 00:23:45.950 
************************************ 00:23:45.950 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:45.950 * Looking for test storage... 00:23:45.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:45.950 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:45.950 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:45.950 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:46.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.212 --rc genhtml_branch_coverage=1 00:23:46.212 --rc genhtml_function_coverage=1 00:23:46.212 --rc genhtml_legend=1 00:23:46.212 --rc geninfo_all_blocks=1 00:23:46.212 --rc geninfo_unexecuted_blocks=1 00:23:46.212 00:23:46.212 ' 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:46.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.212 --rc genhtml_branch_coverage=1 00:23:46.212 --rc genhtml_function_coverage=1 00:23:46.212 --rc genhtml_legend=1 00:23:46.212 --rc geninfo_all_blocks=1 00:23:46.212 --rc geninfo_unexecuted_blocks=1 00:23:46.212 00:23:46.212 ' 00:23:46.212 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:46.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.212 --rc genhtml_branch_coverage=1 00:23:46.212 --rc genhtml_function_coverage=1 00:23:46.212 --rc genhtml_legend=1 00:23:46.212 --rc geninfo_all_blocks=1 00:23:46.212 --rc geninfo_unexecuted_blocks=1 00:23:46.212 00:23:46.212 ' 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:46.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.213 --rc genhtml_branch_coverage=1 00:23:46.213 --rc genhtml_function_coverage=1 00:23:46.213 --rc genhtml_legend=1 00:23:46.213 --rc geninfo_all_blocks=1 00:23:46.213 --rc geninfo_unexecuted_blocks=1 00:23:46.213 00:23:46.213 ' 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.213 11:57:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:54.362 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:54.362 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:54.362 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:54.362 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.362 11:57:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.362 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.362 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.362 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.362 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.362 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.362 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.362 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:23:54.362 00:23:54.362 --- 10.0.0.2 ping statistics --- 00:23:54.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.362 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:23:54.362 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:23:54.362 00:23:54.362 --- 10.0.0.1 ping statistics --- 00:23:54.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.362 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1109210 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1109210 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1109210 ']' 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:54.363 11:57:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.363 [2024-10-11 11:57:38.249160] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:23:54.363 [2024-10-11 11:57:38.249223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.363 [2024-10-11 11:57:38.339415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.363 [2024-10-11 11:57:38.394104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.363 [2024-10-11 11:57:38.394162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.363 [2024-10-11 11:57:38.394171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.363 [2024-10-11 11:57:38.394178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.363 [2024-10-11 11:57:38.394184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.363 [2024-10-11 11:57:38.396295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.363 [2024-10-11 11:57:38.396453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.363 [2024-10-11 11:57:38.396614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.363 [2024-10-11 11:57:38.396616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.624 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:54.624 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:54.624 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 [2024-10-11 11:57:39.084968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 Malloc0 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 [2024-10-11 11:57:39.203784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:54.625 [ 00:23:54.625 { 00:23:54.625 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:54.625 "subtype": "Discovery", 00:23:54.625 "listen_addresses": [ 00:23:54.625 { 00:23:54.625 "trtype": "TCP", 00:23:54.625 "adrfam": "IPv4", 00:23:54.625 "traddr": "10.0.0.2", 00:23:54.625 "trsvcid": "4420" 00:23:54.625 } 00:23:54.625 ], 00:23:54.625 "allow_any_host": true, 00:23:54.625 "hosts": [] 00:23:54.625 }, 00:23:54.625 { 00:23:54.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.625 "subtype": "NVMe", 00:23:54.625 "listen_addresses": [ 00:23:54.625 { 00:23:54.625 "trtype": "TCP", 00:23:54.625 "adrfam": "IPv4", 00:23:54.625 "traddr": "10.0.0.2", 00:23:54.625 "trsvcid": "4420" 00:23:54.625 } 00:23:54.625 ], 00:23:54.625 "allow_any_host": true, 00:23:54.625 "hosts": [], 00:23:54.625 "serial_number": "SPDK00000000000001", 00:23:54.625 "model_number": "SPDK bdev Controller", 00:23:54.625 "max_namespaces": 32, 00:23:54.625 "min_cntlid": 1, 00:23:54.625 "max_cntlid": 65519, 00:23:54.625 "namespaces": [ 00:23:54.625 { 00:23:54.625 "nsid": 1, 00:23:54.625 "bdev_name": "Malloc0", 00:23:54.625 "name": "Malloc0", 00:23:54.625 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:54.625 "eui64": "ABCDEF0123456789", 00:23:54.625 "uuid": "d4907451-b09e-45a4-8c07-dc76b6f85d1b" 00:23:54.625 } 00:23:54.625 ] 00:23:54.625 } 00:23:54.625 ] 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.625 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:54.888 [2024-10-11 11:57:39.268048] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:23:54.888 [2024-10-11 11:57:39.268096] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109541 ] 00:23:54.888 [2024-10-11 11:57:39.307858] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:54.888 [2024-10-11 11:57:39.307926] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:54.888 [2024-10-11 11:57:39.307932] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:54.888 [2024-10-11 11:57:39.307952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:54.888 [2024-10-11 11:57:39.307964] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:54.888 [2024-10-11 11:57:39.308827] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:54.888 [2024-10-11 11:57:39.308879] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11c4190 0 00:23:54.888 [2024-10-11 11:57:39.322687] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:54.888 [2024-10-11 11:57:39.322712] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:54.888 [2024-10-11 11:57:39.322718] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:54.888 [2024-10-11 11:57:39.322722] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:54.888 [2024-10-11 11:57:39.322762] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.888 [2024-10-11 11:57:39.322769] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.888 [2024-10-11 11:57:39.322773] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.888 [2024-10-11 11:57:39.322790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:54.888 [2024-10-11 11:57:39.322815] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.888 [2024-10-11 11:57:39.329726] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.888 [2024-10-11 11:57:39.329737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.888 [2024-10-11 11:57:39.329741] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.888 [2024-10-11 11:57:39.329746] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.888 [2024-10-11 11:57:39.329761] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:54.888 [2024-10-11 11:57:39.329770] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:54.889 [2024-10-11 11:57:39.329775] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:54.889 [2024-10-11 11:57:39.329792] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.329796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.329800] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.329809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.889 [2024-10-11 11:57:39.329827] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.889 [2024-10-11 11:57:39.330035] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.889 [2024-10-11 11:57:39.330042] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.889 [2024-10-11 11:57:39.330046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.889 [2024-10-11 11:57:39.330055] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:54.889 [2024-10-11 11:57:39.330063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:54.889 [2024-10-11 11:57:39.330070] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330074] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330077] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.330084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.889 [2024-10-11 11:57:39.330096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.889 [2024-10-11 11:57:39.330264] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.889 [2024-10-11 11:57:39.330270] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.889 [2024-10-11 11:57:39.330274] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330278] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.889 [2024-10-11 11:57:39.330283] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:54.889 [2024-10-11 11:57:39.330296] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:54.889 [2024-10-11 11:57:39.330303] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330307] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330311] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.330317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.889 [2024-10-11 11:57:39.330328] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.889 
[2024-10-11 11:57:39.330538] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.889 [2024-10-11 11:57:39.330544] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.889 [2024-10-11 11:57:39.330548] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.889 [2024-10-11 11:57:39.330557] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:54.889 [2024-10-11 11:57:39.330567] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330571] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330575] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.330581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.889 [2024-10-11 11:57:39.330592] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.889 [2024-10-11 11:57:39.330793] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.889 [2024-10-11 11:57:39.330799] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.889 [2024-10-11 11:57:39.330803] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330807] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.889 [2024-10-11 11:57:39.330811] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:54.889 [2024-10-11 11:57:39.330816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:54.889 [2024-10-11 11:57:39.330824] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:54.889 [2024-10-11 11:57:39.330930] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:54.889 [2024-10-11 11:57:39.330935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:54.889 [2024-10-11 11:57:39.330944] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330948] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.330952] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.330958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.889 [2024-10-11 11:57:39.330969] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.889 [2024-10-11 11:57:39.331158] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.889 [2024-10-11 11:57:39.331164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:23:54.889 [2024-10-11 11:57:39.331170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.331174] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.889 [2024-10-11 11:57:39.331179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:54.889 [2024-10-11 11:57:39.331188] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.331192] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.331196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.331203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.889 [2024-10-11 11:57:39.331213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.889 [2024-10-11 11:57:39.331387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.889 [2024-10-11 11:57:39.331394] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.889 [2024-10-11 11:57:39.331397] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.331401] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.889 [2024-10-11 11:57:39.331406] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:54.889 [2024-10-11 11:57:39.331410] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:54.889 [2024-10-11 11:57:39.331418] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:54.889 [2024-10-11 11:57:39.331426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:54.889 [2024-10-11 11:57:39.331437] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.331441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.331448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.889 [2024-10-11 11:57:39.331459] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.889 [2024-10-11 11:57:39.331710] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.889 [2024-10-11 11:57:39.331717] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.889 [2024-10-11 11:57:39.331720] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.331725] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c4190): datao=0, datal=4096, cccid=0 00:23:54.889 [2024-10-11 11:57:39.331730] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12308c0) on tqpair(0x11c4190): expected_datao=0, 
payload_size=4096 00:23:54.889 [2024-10-11 11:57:39.331734] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.331750] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.331755] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.372845] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.889 [2024-10-11 11:57:39.372856] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.889 [2024-10-11 11:57:39.372860] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.372864] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.889 [2024-10-11 11:57:39.372873] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:54.889 [2024-10-11 11:57:39.372883] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:54.889 [2024-10-11 11:57:39.372887] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:54.889 [2024-10-11 11:57:39.372893] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:54.889 [2024-10-11 11:57:39.372897] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:54.889 [2024-10-11 11:57:39.372903] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:54.889 [2024-10-11 11:57:39.372912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:54.889 [2024-10-11 11:57:39.372920] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.372924] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.372928] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.372937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:54.889 [2024-10-11 11:57:39.372950] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.889 [2024-10-11 11:57:39.373126] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.889 [2024-10-11 11:57:39.373133] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.889 [2024-10-11 11:57:39.373136] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.373140] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.889 [2024-10-11 11:57:39.373148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.373152] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.373156] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.373162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.889 [2024-10-11 11:57:39.373169] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.373173] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.373176] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.373182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.889 [2024-10-11 11:57:39.373188] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.373192] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.373196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11c4190) 00:23:54.889 [2024-10-11 11:57:39.373201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.889 [2024-10-11 11:57:39.373208] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.889 [2024-10-11 11:57:39.373212] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.373215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c4190) 00:23:54.890 [2024-10-11 11:57:39.373221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.890 [2024-10-11 11:57:39.373226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:54.890 [2024-10-11 11:57:39.373239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:54.890 [2024-10-11 11:57:39.373248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.373252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c4190) 00:23:54.890 [2024-10-11 11:57:39.373259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.890 [2024-10-11 11:57:39.373272] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12308c0, cid 0, qid 0 00:23:54.890 [2024-10-11 11:57:39.373277] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230a40, cid 1, qid 0 00:23:54.890 [2024-10-11 11:57:39.373282] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230bc0, cid 2, qid 0 00:23:54.890 [2024-10-11 11:57:39.373287] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230d40, cid 3, qid 0 00:23:54.890 [2024-10-11 11:57:39.373292] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230ec0, cid 4, qid 0 00:23:54.890 [2024-10-11 11:57:39.373558] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.890 [2024-10-11 11:57:39.373564] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.890 [2024-10-11 11:57:39.373568] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.373572] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1230ec0) on tqpair=0x11c4190 00:23:54.890 [2024-10-11 11:57:39.373577] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:54.890 [2024-10-11 11:57:39.373582] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:54.890 [2024-10-11 11:57:39.373594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.373598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c4190) 00:23:54.890 [2024-10-11 11:57:39.373605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.890 [2024-10-11 11:57:39.373615] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230ec0, cid 4, qid 0 00:23:54.890 [2024-10-11 11:57:39.373803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.890 [2024-10-11 11:57:39.373810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.890 [2024-10-11 11:57:39.373813] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.373817] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c4190): datao=0, datal=4096, cccid=4 00:23:54.890 [2024-10-11 11:57:39.373822] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1230ec0) on tqpair(0x11c4190): expected_datao=0, payload_size=4096 00:23:54.890 [2024-10-11 11:57:39.373827] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.373838] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.373843] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374022] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.890 [2024-10-11 11:57:39.374028] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.890 [2024-10-11 11:57:39.374032] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230ec0) on tqpair=0x11c4190 00:23:54.890 [2024-10-11 11:57:39.374049] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:54.890 [2024-10-11 11:57:39.374082] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374087] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c4190) 00:23:54.890 [2024-10-11 11:57:39.374094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.890 [2024-10-11 11:57:39.374106] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374110] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374114] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11c4190) 00:23:54.890 [2024-10-11 11:57:39.374120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.890 [2024-10-11 
11:57:39.374132] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230ec0, cid 4, qid 0 00:23:54.890 [2024-10-11 11:57:39.374137] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1231040, cid 5, qid 0 00:23:54.890 [2024-10-11 11:57:39.374413] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.890 [2024-10-11 11:57:39.374419] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.890 [2024-10-11 11:57:39.374423] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374426] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c4190): datao=0, datal=1024, cccid=4 00:23:54.890 [2024-10-11 11:57:39.374431] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1230ec0) on tqpair(0x11c4190): expected_datao=0, payload_size=1024 00:23:54.890 [2024-10-11 11:57:39.374435] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374442] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374446] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374452] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.890 [2024-10-11 11:57:39.374458] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.890 [2024-10-11 11:57:39.374461] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.374465] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1231040) on tqpair=0x11c4190 00:23:54.890 [2024-10-11 11:57:39.418677] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.890 [2024-10-11 11:57:39.418688] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.890 [2024-10-11 11:57:39.418692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.418696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230ec0) on tqpair=0x11c4190 00:23:54.890 [2024-10-11 11:57:39.418713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.418718] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c4190) 00:23:54.890 [2024-10-11 11:57:39.418725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.890 [2024-10-11 11:57:39.418742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230ec0, cid 4, qid 0 00:23:54.890 [2024-10-11 11:57:39.419041] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.890 [2024-10-11 11:57:39.419048] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.890 [2024-10-11 11:57:39.419052] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.419055] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c4190): datao=0, datal=3072, cccid=4 00:23:54.890 [2024-10-11 11:57:39.419060] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1230ec0) on tqpair(0x11c4190): expected_datao=0, payload_size=3072 00:23:54.890 [2024-10-11 11:57:39.419064] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.419071] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.419075] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.459823] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.890 [2024-10-11 11:57:39.459833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.890 [2024-10-11 11:57:39.459837] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.459845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230ec0) on tqpair=0x11c4190 00:23:54.890 [2024-10-11 11:57:39.459855] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.459859] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c4190) 00:23:54.890 [2024-10-11 11:57:39.459865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.890 [2024-10-11 11:57:39.459881] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230ec0, cid 4, qid 0 00:23:54.890 [2024-10-11 11:57:39.460090] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:54.890 [2024-10-11 11:57:39.460096] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:54.890 [2024-10-11 11:57:39.460099] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.460103] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c4190): datao=0, datal=8, cccid=4 00:23:54.890 [2024-10-11 11:57:39.460108] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1230ec0) on tqpair(0x11c4190): expected_datao=0, payload_size=8 00:23:54.890 [2024-10-11 11:57:39.460112] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.460119] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.460122] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.500830] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.890 [2024-10-11 11:57:39.500840] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.890 [2024-10-11 11:57:39.500844] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.890 [2024-10-11 11:57:39.500848] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230ec0) on tqpair=0x11c4190 00:23:54.890 ===================================================== 00:23:54.890 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:54.890 ===================================================== 00:23:54.890 Controller Capabilities/Features 00:23:54.890 ================================ 00:23:54.890 Vendor ID: 0000 00:23:54.890 Subsystem Vendor ID: 0000 00:23:54.890 Serial Number: .................... 00:23:54.890 Model Number: ........................................ 
00:23:54.890 Firmware Version: 25.01 00:23:54.890 Recommended Arb Burst: 0 00:23:54.890 IEEE OUI Identifier: 00 00 00 00:23:54.890 Multi-path I/O 00:23:54.890 May have multiple subsystem ports: No 00:23:54.890 May have multiple controllers: No 00:23:54.890 Associated with SR-IOV VF: No 00:23:54.890 Max Data Transfer Size: 131072 00:23:54.890 Max Number of Namespaces: 0 00:23:54.890 Max Number of I/O Queues: 1024 00:23:54.890 NVMe Specification Version (VS): 1.3 00:23:54.890 NVMe Specification Version (Identify): 1.3 00:23:54.890 Maximum Queue Entries: 128 00:23:54.890 Contiguous Queues Required: Yes 00:23:54.890 Arbitration Mechanisms Supported 00:23:54.890 Weighted Round Robin: Not Supported 00:23:54.890 Vendor Specific: Not Supported 00:23:54.890 Reset Timeout: 15000 ms 00:23:54.890 Doorbell Stride: 4 bytes 00:23:54.890 NVM Subsystem Reset: Not Supported 00:23:54.890 Command Sets Supported 00:23:54.890 NVM Command Set: Supported 00:23:54.890 Boot Partition: Not Supported 00:23:54.890 Memory Page Size Minimum: 4096 bytes 00:23:54.890 Memory Page Size Maximum: 4096 bytes 00:23:54.890 Persistent Memory Region: Not Supported 00:23:54.890 Optional Asynchronous Events Supported 00:23:54.890 Namespace Attribute Notices: Not Supported 00:23:54.890 Firmware Activation Notices: Not Supported 00:23:54.890 ANA Change Notices: Not Supported 00:23:54.890 PLE Aggregate Log Change Notices: Not Supported 00:23:54.890 LBA Status Info Alert Notices: Not Supported 00:23:54.890 EGE Aggregate Log Change Notices: Not Supported 00:23:54.890 Normal NVM Subsystem Shutdown event: Not Supported 00:23:54.890 Zone Descriptor Change Notices: Not Supported 00:23:54.890 Discovery Log Change Notices: Supported 00:23:54.890 Controller Attributes 00:23:54.890 128-bit Host Identifier: Not Supported 00:23:54.890 Non-Operational Permissive Mode: Not Supported 00:23:54.890 NVM Sets: Not Supported 00:23:54.890 Read Recovery Levels: Not Supported 00:23:54.890 Endurance Groups: Not Supported 00:23:54.890 Predictable Latency Mode: Not Supported 00:23:54.890 Traffic Based Keep ALive: Not Supported 00:23:54.890 Namespace Granularity: Not Supported 00:23:54.890 SQ Associations: Not Supported 00:23:54.890 UUID List: Not Supported 00:23:54.890 Multi-Domain Subsystem: Not Supported 00:23:54.890 Fixed Capacity Management: Not Supported 00:23:54.891 Variable Capacity Management: Not Supported 00:23:54.891 Delete Endurance Group: Not Supported 00:23:54.891 Delete NVM Set: Not Supported 00:23:54.891 Extended LBA Formats Supported: Not Supported 00:23:54.891 Flexible Data Placement Supported: Not Supported 00:23:54.891 00:23:54.891 Controller Memory Buffer Support 00:23:54.891 ================================ 00:23:54.891 Supported: No 00:23:54.891 00:23:54.891 Persistent Memory Region Support 00:23:54.891 ================================ 00:23:54.891 Supported: No 00:23:54.891 00:23:54.891 Admin Command Set Attributes 00:23:54.891 ============================ 00:23:54.891 Security Send/Receive: Not Supported 00:23:54.891 Format NVM: Not Supported 00:23:54.891 Firmware Activate/Download: Not Supported 00:23:54.891 Namespace Management: Not Supported 00:23:54.891 Device Self-Test: Not Supported 00:23:54.891 Directives: Not Supported 00:23:54.891 NVMe-MI: Not Supported 00:23:54.891 Virtualization Management: Not Supported 00:23:54.891 Doorbell Buffer Config: Not Supported 00:23:54.891 Get LBA Status Capability: Not Supported 00:23:54.891 Command & Feature Lockdown Capability: Not Supported 00:23:54.891 Abort Command Limit: 1 00:23:54.891 Async 
Event Request Limit: 4 00:23:54.891 Number of Firmware Slots: N/A 00:23:54.891 Firmware Slot 1 Read-Only: N/A 00:23:54.891 Firmware Activation Without Reset: N/A 00:23:54.891 Multiple Update Detection Support: N/A 00:23:54.891 Firmware Update Granularity: No Information Provided 00:23:54.891 Per-Namespace SMART Log: No 00:23:54.891 Asymmetric Namespace Access Log Page: Not Supported 00:23:54.891 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:54.891 Command Effects Log Page: Not Supported 00:23:54.891 Get Log Page Extended Data: Supported 00:23:54.891 Telemetry Log Pages: Not Supported 00:23:54.891 Persistent Event Log Pages: Not Supported 00:23:54.891 Supported Log Pages Log Page: May Support 00:23:54.891 Commands Supported & Effects Log Page: Not Supported 00:23:54.891 Feature Identifiers & Effects Log Page:May Support 00:23:54.891 NVMe-MI Commands & Effects Log Page: May Support 00:23:54.891 Data Area 4 for Telemetry Log: Not Supported 00:23:54.891 Error Log Page Entries Supported: 128 00:23:54.891 Keep Alive: Not Supported 00:23:54.891 00:23:54.891 NVM Command Set Attributes 00:23:54.891 ========================== 00:23:54.891 Submission Queue Entry Size 00:23:54.891 Max: 1 00:23:54.891 Min: 1 00:23:54.891 Completion Queue Entry Size 00:23:54.891 Max: 1 00:23:54.891 Min: 1 00:23:54.891 Number of Namespaces: 0 00:23:54.891 Compare Command: Not Supported 00:23:54.891 Write Uncorrectable Command: Not Supported 00:23:54.891 Dataset Management Command: Not Supported 00:23:54.891 Write Zeroes Command: Not Supported 00:23:54.891 Set Features Save Field: Not Supported 00:23:54.891 Reservations: Not Supported 00:23:54.891 Timestamp: Not Supported 00:23:54.891 Copy: Not Supported 00:23:54.891 Volatile Write Cache: Not Present 00:23:54.891 Atomic Write Unit (Normal): 1 00:23:54.891 Atomic Write Unit (PFail): 1 00:23:54.891 Atomic Compare & Write Unit: 1 00:23:54.891 Fused Compare & Write: Supported 00:23:54.891 Scatter-Gather List 00:23:54.891 SGL Command Set: Supported 00:23:54.891 SGL Keyed: Supported 00:23:54.891 SGL Bit Bucket Descriptor: Not Supported 00:23:54.891 SGL Metadata Pointer: Not Supported 00:23:54.891 Oversized SGL: Not Supported 00:23:54.891 SGL Metadata Address: Not Supported 00:23:54.891 SGL Offset: Supported 00:23:54.891 Transport SGL Data Block: Not Supported 00:23:54.891 Replay Protected Memory Block: Not Supported 00:23:54.891 00:23:54.891 Firmware Slot Information 00:23:54.891 ========================= 00:23:54.891 Active slot: 0 00:23:54.891 00:23:54.891 00:23:54.891 Error Log 00:23:54.891 ========= 00:23:54.891 00:23:54.891 Active Namespaces 00:23:54.891 ================= 00:23:54.891 Discovery Log Page 00:23:54.891 ================== 00:23:54.891 Generation Counter: 2 00:23:54.891 Number of Records: 2 00:23:54.891 Record Format: 0 00:23:54.891 00:23:54.891 Discovery Log Entry 0 00:23:54.891 ---------------------- 00:23:54.891 Transport Type: 3 (TCP) 00:23:54.891 Address Family: 1 (IPv4) 00:23:54.891 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:54.891 Entry Flags: 00:23:54.891 Duplicate Returned Information: 1 00:23:54.891 Explicit Persistent Connection Support for Discovery: 1 00:23:54.891 Transport Requirements: 00:23:54.891 Secure Channel: Not Required 00:23:54.891 Port ID: 0 (0x0000) 00:23:54.891 Controller ID: 65535 (0xffff) 00:23:54.891 Admin Max SQ Size: 128 00:23:54.891 Transport Service Identifier: 4420 00:23:54.891 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:54.891 Transport Address: 10.0.0.2 00:23:54.891 
Discovery Log Entry 1 00:23:54.891 ---------------------- 00:23:54.891 Transport Type: 3 (TCP) 00:23:54.891 Address Family: 1 (IPv4) 00:23:54.891 Subsystem Type: 2 (NVM Subsystem) 00:23:54.891 Entry Flags: 00:23:54.891 Duplicate Returned Information: 0 00:23:54.891 Explicit Persistent Connection Support for Discovery: 0 00:23:54.891 Transport Requirements: 00:23:54.891 Secure Channel: Not Required 00:23:54.891 Port ID: 0 (0x0000) 00:23:54.891 Controller ID: 65535 (0xffff) 00:23:54.891 Admin Max SQ Size: 128 00:23:54.891 Transport Service Identifier: 4420 00:23:54.891 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:54.891 Transport Address: 10.0.0.2 [2024-10-11 11:57:39.500959] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:54.891 [2024-10-11 11:57:39.500972] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12308c0) on tqpair=0x11c4190 00:23:54.891 [2024-10-11 11:57:39.500979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.891 [2024-10-11 11:57:39.500985] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230a40) on tqpair=0x11c4190 00:23:54.891 [2024-10-11 11:57:39.500990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.891 [2024-10-11 11:57:39.500995] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230bc0) on tqpair=0x11c4190 00:23:54.891 [2024-10-11 11:57:39.500999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.891 [2024-10-11 11:57:39.501004] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230d40) on tqpair=0x11c4190 00:23:54.891 [2024-10-11 11:57:39.501009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.891 [2024-10-11 11:57:39.501019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501023] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c4190) 00:23:54.891 [2024-10-11 11:57:39.501034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.891 [2024-10-11 11:57:39.501051] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230d40, cid 3, qid 0 00:23:54.891 [2024-10-11 11:57:39.501293] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.891 [2024-10-11 11:57:39.501300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.891 [2024-10-11 11:57:39.501304] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230d40) on tqpair=0x11c4190 00:23:54.891 [2024-10-11 11:57:39.501317] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501321] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501325] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c4190) 00:23:54.891 [2024-10-11 
11:57:39.501331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.891 [2024-10-11 11:57:39.501345] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230d40, cid 3, qid 0 00:23:54.891 [2024-10-11 11:57:39.501585] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.891 [2024-10-11 11:57:39.501591] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.891 [2024-10-11 11:57:39.501595] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501598] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230d40) on tqpair=0x11c4190 00:23:54.891 [2024-10-11 11:57:39.501604] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:54.891 [2024-10-11 11:57:39.501612] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:54.891 [2024-10-11 11:57:39.501622] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501626] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c4190) 00:23:54.891 [2024-10-11 11:57:39.501636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.891 [2024-10-11 11:57:39.501647] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230d40, cid 3, qid 0 00:23:54.891 [2024-10-11 11:57:39.501872] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.891 [2024-10-11 11:57:39.501878] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.891 [2024-10-11 11:57:39.501882] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230d40) on tqpair=0x11c4190 00:23:54.891 [2024-10-11 11:57:39.501896] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501900] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.891 [2024-10-11 11:57:39.501904] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c4190) 00:23:54.891 [2024-10-11 11:57:39.501910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.891 [2024-10-11 11:57:39.501921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230d40, cid 3, qid 0 00:23:54.891 [2024-10-11 11:57:39.502098] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.892 [2024-10-11 11:57:39.502105] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.892 [2024-10-11 11:57:39.502108] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.502112] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230d40) on tqpair=0x11c4190 00:23:54.892 [2024-10-11 11:57:39.502122] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.502126] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.502129] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c4190) 00:23:54.892 [2024-10-11 11:57:39.502136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.892 [2024-10-11 11:57:39.502146] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230d40, cid 3, qid 0 00:23:54.892 [2024-10-11 11:57:39.502366] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.892 [2024-10-11 11:57:39.502377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.892 [2024-10-11 11:57:39.502381] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.502385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230d40) on tqpair=0x11c4190 00:23:54.892 [2024-10-11 11:57:39.502395] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.502399] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.502402] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c4190) 00:23:54.892 [2024-10-11 11:57:39.502409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.892 [2024-10-11 11:57:39.502419] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230d40, cid 3, qid 0 00:23:54.892 [2024-10-11 11:57:39.502640] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.892 [2024-10-11 11:57:39.502647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.892 [2024-10-11 11:57:39.502650] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.502654] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230d40) on tqpair=0x11c4190 00:23:54.892 [2024-10-11 11:57:39.502664] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.506675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.506680] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c4190) 00:23:54.892 [2024-10-11 11:57:39.506687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:54.892 [2024-10-11 11:57:39.506698] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1230d40, cid 3, qid 0 00:23:54.892 [2024-10-11 11:57:39.506881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:54.892 [2024-10-11 11:57:39.506887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:54.892 [2024-10-11 11:57:39.506890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:54.892 [2024-10-11 11:57:39.506894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1230d40) on tqpair=0x11c4190 00:23:54.892 [2024-10-11 11:57:39.506902] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:55.156 00:23:55.156 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
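For reference, the target state and the two identify passes traced in this run reduce to the following standalone commands; a minimal sketch, assuming an nvmf_tgt is already running with the default RPC socket and the in-tree paths used by this job (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py):

# Recreate the target configuration that host/identify.sh built via rpc_cmd:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# First pass: identify the discovery subsystem (output shown above).
./build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
# Second pass: identify the NVM subsystem itself (trace begins below).
./build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all

The first pass yields the discovery log printed above, with two records: entry 0 for the discovery subsystem itself and entry 1 for nqn.2016-06.io.spdk:cnode1; the second pass, whose trace follows, identifies cnode1 and its Malloc0 namespace.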
00:23:55.156 [2024-10-11 11:57:39.553868] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:23:55.156 [2024-10-11 11:57:39.553914] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109543 ] 00:23:55.156 [2024-10-11 11:57:39.591685] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:55.156 [2024-10-11 11:57:39.591741] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:55.156 [2024-10-11 11:57:39.591747] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:55.156 [2024-10-11 11:57:39.591765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:55.156 [2024-10-11 11:57:39.591775] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:55.156 [2024-10-11 11:57:39.592502] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:55.156 [2024-10-11 11:57:39.592544] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc6e190 0 00:23:55.156 [2024-10-11 11:57:39.606684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:55.156 [2024-10-11 11:57:39.606700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:55.156 [2024-10-11 11:57:39.606705] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:55.156 [2024-10-11 11:57:39.606709] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:55.156 [2024-10-11 11:57:39.606741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.156 [2024-10-11 11:57:39.606747] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:55.156 [2024-10-11 11:57:39.606751] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190) 00:23:55.156 [2024-10-11 11:57:39.606766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:55.156 [2024-10-11 11:57:39.606790] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0 00:23:55.156 [2024-10-11 11:57:39.614684] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:55.156 [2024-10-11 11:57:39.614695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:55.156 [2024-10-11 11:57:39.614699] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:55.156 [2024-10-11 11:57:39.614704] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190 00:23:55.157 [2024-10-11 11:57:39.614714] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:55.157 [2024-10-11 11:57:39.614721] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:55.157 [2024-10-11 11:57:39.614727] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:55.157 [2024-10-11 11:57:39.614742] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:55.157 [2024-10-11 11:57:39.614746] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.614750] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.157 [2024-10-11 11:57:39.614760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.157 [2024-10-11 11:57:39.614779] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.157 [2024-10-11 11:57:39.614968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.157 [2024-10-11 11:57:39.614974] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.157 [2024-10-11 11:57:39.614978] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.614982] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.157 [2024-10-11 11:57:39.614988] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:23:55.157 [2024-10-11 11:57:39.614995] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:23:55.157 [2024-10-11 11:57:39.615002] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615006] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615009] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.157 [2024-10-11 11:57:39.615016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.157 [2024-10-11 11:57:39.615027] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.157 [2024-10-11 11:57:39.615243] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.157 [2024-10-11 11:57:39.615250] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.157 [2024-10-11 11:57:39.615253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615261] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.157 [2024-10-11 11:57:39.615266] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:23:55.157 [2024-10-11 11:57:39.615275] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:23:55.157 [2024-10-11 11:57:39.615282] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615286] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615289] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.157 [2024-10-11 11:57:39.615296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.157 [2024-10-11 11:57:39.615306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.157 [2024-10-11 11:57:39.615507] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.157 [2024-10-11 11:57:39.615513] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.157 [2024-10-11 11:57:39.615516] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615520] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.157 [2024-10-11 11:57:39.615525] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:23:55.157 [2024-10-11 11:57:39.615535] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615539] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.157 [2024-10-11 11:57:39.615549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.157 [2024-10-11 11:57:39.615559] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.157 [2024-10-11 11:57:39.615753] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.157 [2024-10-11 11:57:39.615760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.157 [2024-10-11 11:57:39.615764] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615767] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.157 [2024-10-11 11:57:39.615772] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:23:55.157 [2024-10-11 11:57:39.615777] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:23:55.157 [2024-10-11 11:57:39.615785] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:23:55.157 [2024-10-11 11:57:39.615890] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:23:55.157 [2024-10-11 11:57:39.615894] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:23:55.157 [2024-10-11 11:57:39.615903] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615907] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.615910] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.157 [2024-10-11 11:57:39.615917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.157 [2024-10-11 11:57:39.615927] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.157 [2024-10-11 11:57:39.616113] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.157 [2024-10-11 11:57:39.616122] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.157 [2024-10-11 11:57:39.616126] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616130] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.157 [2024-10-11 11:57:39.616134] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:23:55.157 [2024-10-11 11:57:39.616143] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616151] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.157 [2024-10-11 11:57:39.616158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.157 [2024-10-11 11:57:39.616168] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.157 [2024-10-11 11:57:39.616416] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.157 [2024-10-11 11:57:39.616422] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.157 [2024-10-11 11:57:39.616426] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616429] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.157 [2024-10-11 11:57:39.616434] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:23:55.157 [2024-10-11 11:57:39.616438] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:23:55.157 [2024-10-11 11:57:39.616447] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:23:55.157 [2024-10-11 11:57:39.616455] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:23:55.157 [2024-10-11 11:57:39.616465] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.157 [2024-10-11 11:57:39.616476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.157 [2024-10-11 11:57:39.616486] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.157 [2024-10-11 11:57:39.616742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.157 [2024-10-11 11:57:39.616749] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.157 [2024-10-11 11:57:39.616753] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616757] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc6e190): datao=0, datal=4096, cccid=0
00:23:55.157 [2024-10-11 11:57:39.616762] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcda8c0) on tqpair(0xc6e190): expected_datao=0, payload_size=4096
00:23:55.157 [2024-10-11 11:57:39.616766] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616785] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616790] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.157 [2024-10-11 11:57:39.616953] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.157 [2024-10-11 11:57:39.616959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.158 [2024-10-11 11:57:39.616962] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.616966] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.158 [2024-10-11 11:57:39.616974] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:23:55.158 [2024-10-11 11:57:39.616982] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:23:55.158 [2024-10-11 11:57:39.616986] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:23:55.158 [2024-10-11 11:57:39.616990] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:23:55.158 [2024-10-11 11:57:39.616995] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:23:55.158 [2024-10-11 11:57:39.617000] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.617008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.617014] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617019] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617022] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.617029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:23:55.158 [2024-10-11 11:57:39.617040] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.158 [2024-10-11 11:57:39.617258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.158 [2024-10-11 11:57:39.617264] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.158 [2024-10-11 11:57:39.617268] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617271] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.158 [2024-10-11 11:57:39.617279] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617282] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.617292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.158 [2024-10-11 11:57:39.617298] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617302] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617306] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.617311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.158 [2024-10-11 11:57:39.617318] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617321] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617325] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.617330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.158 [2024-10-11 11:57:39.617337] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617340] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.617349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.158 [2024-10-11 11:57:39.617354] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.617365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.617374] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617378] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.617385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.158 [2024-10-11 11:57:39.617396] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcda8c0, cid 0, qid 0
00:23:55.158 [2024-10-11 11:57:39.617402] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdaa40, cid 1, qid 0
00:23:55.158 [2024-10-11 11:57:39.617406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdabc0, cid 2, qid 0
00:23:55.158 [2024-10-11 11:57:39.617411] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.158 [2024-10-11 11:57:39.617416] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdaec0, cid 4, qid 0
00:23:55.158 [2024-10-11 11:57:39.617652] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.158 [2024-10-11 11:57:39.617658] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.158 [2024-10-11 11:57:39.617661] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617665] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdaec0) on tqpair=0xc6e190
00:23:55.158 [2024-10-11 11:57:39.617677] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:23:55.158 [2024-10-11 11:57:39.617682] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.617693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.617702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.617708] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617712] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617716] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.617722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:23:55.158 [2024-10-11 11:57:39.617732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdaec0, cid 4, qid 0
00:23:55.158 [2024-10-11 11:57:39.617949] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.158 [2024-10-11 11:57:39.617956] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.158 [2024-10-11 11:57:39.617959] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.617963] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdaec0) on tqpair=0xc6e190
00:23:55.158 [2024-10-11 11:57:39.618031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.618041] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.618049] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.618053] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.618059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.158 [2024-10-11 11:57:39.618069] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdaec0, cid 4, qid 0
00:23:55.158 [2024-10-11 11:57:39.618350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.158 [2024-10-11 11:57:39.618358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.158 [2024-10-11 11:57:39.618362] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.618366] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc6e190): datao=0, datal=4096, cccid=4
00:23:55.158 [2024-10-11 11:57:39.618370] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcdaec0) on tqpair(0xc6e190): expected_datao=0, payload_size=4096
00:23:55.158 [2024-10-11 11:57:39.618375] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.618382] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.618385] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.618534] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.158 [2024-10-11 11:57:39.618540] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.158 [2024-10-11 11:57:39.618544] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.618548] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdaec0) on tqpair=0xc6e190
00:23:55.158 [2024-10-11 11:57:39.618558] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:23:55.158 [2024-10-11 11:57:39.618572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.618582] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:23:55.158 [2024-10-11 11:57:39.618589] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.618593] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc6e190)
00:23:55.158 [2024-10-11 11:57:39.618599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.158 [2024-10-11 11:57:39.618610] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdaec0, cid 4, qid 0
00:23:55.158 [2024-10-11 11:57:39.622680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.158 [2024-10-11 11:57:39.622689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.158 [2024-10-11 11:57:39.622692] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.158 [2024-10-11 11:57:39.622696] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc6e190): datao=0, datal=4096, cccid=4
00:23:55.158 [2024-10-11 11:57:39.622701] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcdaec0) on tqpair(0xc6e190): expected_datao=0, payload_size=4096
00:23:55.159 [2024-10-11 11:57:39.622706] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.622712] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.622716] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.660678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.159 [2024-10-11 11:57:39.660687] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.159 [2024-10-11 11:57:39.660691] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.660695] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdaec0) on tqpair=0xc6e190
00:23:55.159 [2024-10-11 11:57:39.660711] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.660721] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.660729] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.660733] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.660740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.660757] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdaec0, cid 4, qid 0
00:23:55.159 [2024-10-11 11:57:39.660941] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.159 [2024-10-11 11:57:39.660947] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.159 [2024-10-11 11:57:39.660951] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.660955] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc6e190): datao=0, datal=4096, cccid=4
00:23:55.159 [2024-10-11 11:57:39.660959] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcdaec0) on tqpair(0xc6e190): expected_datao=0, payload_size=4096
00:23:55.159 [2024-10-11 11:57:39.660963] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.660977] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.660981] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.701825] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.159 [2024-10-11 11:57:39.701836] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.159 [2024-10-11 11:57:39.701840] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.701844] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdaec0) on tqpair=0xc6e190
00:23:55.159 [2024-10-11 11:57:39.701854] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.701863] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.701873] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.701880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.701885] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.701890] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.701896] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID
00:23:55.159 [2024-10-11 11:57:39.701900] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms)
00:23:55.159 [2024-10-11 11:57:39.701907] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout)
00:23:55.159 [2024-10-11 11:57:39.701923] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.701928] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.701935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.701943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.701947] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.701950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.701957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:23:55.159 [2024-10-11 11:57:39.701970] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdaec0, cid 4, qid 0
00:23:55.159 [2024-10-11 11:57:39.701975] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdb040, cid 5, qid 0
00:23:55.159 [2024-10-11 11:57:39.702105] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.159 [2024-10-11 11:57:39.702115] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.159 [2024-10-11 11:57:39.702119] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.702123] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdaec0) on tqpair=0xc6e190
00:23:55.159 [2024-10-11 11:57:39.702131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.159 [2024-10-11 11:57:39.702137] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.159 [2024-10-11 11:57:39.702140] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.702144] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdb040) on tqpair=0xc6e190
00:23:55.159 [2024-10-11 11:57:39.702154] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.702158] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.702164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.702175] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdb040, cid 5, qid 0
00:23:55.159 [2024-10-11 11:57:39.702361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.159 [2024-10-11 11:57:39.702368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.159 [2024-10-11 11:57:39.702371] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.702375] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdb040) on tqpair=0xc6e190
00:23:55.159 [2024-10-11 11:57:39.702384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.702388] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.702395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.702405] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdb040, cid 5, qid 0
00:23:55.159 [2024-10-11 11:57:39.702626] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.159 [2024-10-11 11:57:39.702632] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.159 [2024-10-11 11:57:39.702636] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.702640] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdb040) on tqpair=0xc6e190
00:23:55.159 [2024-10-11 11:57:39.702649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.702653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.702660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.706679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdb040, cid 5, qid 0
00:23:55.159 [2024-10-11 11:57:39.706880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.159 [2024-10-11 11:57:39.706887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.159 [2024-10-11 11:57:39.706891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.706895] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdb040) on tqpair=0xc6e190
00:23:55.159 [2024-10-11 11:57:39.706913] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.706917] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.706924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.706932] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.706936] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.706945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.706953] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.706957] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.706963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.706973] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.159 [2024-10-11 11:57:39.706978] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc6e190)
00:23:55.159 [2024-10-11 11:57:39.706984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.159 [2024-10-11 11:57:39.706996] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdb040, cid 5, qid 0
00:23:55.160 [2024-10-11 11:57:39.707001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdaec0, cid 4, qid 0
00:23:55.160 [2024-10-11 11:57:39.707006] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdb1c0, cid 6, qid 0
00:23:55.160 [2024-10-11 11:57:39.707011] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdb340, cid 7, qid 0
00:23:55.160 [2024-10-11 11:57:39.707317] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.160 [2024-10-11 11:57:39.707324] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.160 [2024-10-11 11:57:39.707327] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707331] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc6e190): datao=0, datal=8192, cccid=5
00:23:55.160 [2024-10-11 11:57:39.707336] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcdb040) on tqpair(0xc6e190): expected_datao=0, payload_size=8192
00:23:55.160 [2024-10-11 11:57:39.707340] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707422] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707427] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707433] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.160 [2024-10-11 11:57:39.707439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.160 [2024-10-11 11:57:39.707442] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707447] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc6e190): datao=0, datal=512, cccid=4
00:23:55.160 [2024-10-11 11:57:39.707451] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcdaec0) on tqpair(0xc6e190): expected_datao=0, payload_size=512
00:23:55.160 [2024-10-11 11:57:39.707456] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707463] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707466] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707472] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.160 [2024-10-11 11:57:39.707478] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.160 [2024-10-11 11:57:39.707482] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707486] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc6e190): datao=0, datal=512, cccid=6
00:23:55.160 [2024-10-11 11:57:39.707490] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcdb1c0) on tqpair(0xc6e190): expected_datao=0, payload_size=512
00:23:55.160 [2024-10-11 11:57:39.707494] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707501] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707507] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707512] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:55.160 [2024-10-11 11:57:39.707518] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:55.160 [2024-10-11 11:57:39.707522] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707526] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc6e190): datao=0, datal=4096, cccid=7
00:23:55.160 [2024-10-11 11:57:39.707530] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcdb340) on tqpair(0xc6e190): expected_datao=0, payload_size=4096
00:23:55.160 [2024-10-11 11:57:39.707535] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707542] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707546] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707568] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.160 [2024-10-11 11:57:39.707574] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.160 [2024-10-11 11:57:39.707578] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707582] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdb040) on tqpair=0xc6e190
00:23:55.160 [2024-10-11 11:57:39.707595] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.160 [2024-10-11 11:57:39.707601] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.160 [2024-10-11 11:57:39.707604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdaec0) on tqpair=0xc6e190
00:23:55.160 [2024-10-11 11:57:39.707620] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.160 [2024-10-11 11:57:39.707626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.160 [2024-10-11 11:57:39.707630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdb1c0) on tqpair=0xc6e190
00:23:55.160 [2024-10-11 11:57:39.707641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.160 [2024-10-11 11:57:39.707647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.160 [2024-10-11 11:57:39.707650] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.160 [2024-10-11 11:57:39.707655] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdb340) on tqpair=0xc6e190
=====================================================
00:23:55.160 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:55.160 =====================================================
00:23:55.160 Controller Capabilities/Features
00:23:55.160 ================================
00:23:55.160 Vendor ID: 8086
00:23:55.160 Subsystem Vendor ID: 8086
00:23:55.160 Serial Number: SPDK00000000000001
00:23:55.160 Model Number: SPDK bdev Controller
00:23:55.160 Firmware Version: 25.01
00:23:55.160 Recommended Arb Burst: 6
00:23:55.160 IEEE OUI Identifier: e4 d2 5c
00:23:55.160 Multi-path I/O
00:23:55.160 May have multiple subsystem ports: Yes
00:23:55.160 May have multiple controllers: Yes
00:23:55.160 Associated with SR-IOV VF: No
00:23:55.160 Max Data Transfer Size: 131072
00:23:55.160 Max Number of Namespaces: 32
00:23:55.160 Max Number of I/O Queues: 127
00:23:55.160 NVMe Specification Version (VS): 1.3
00:23:55.160 NVMe Specification Version (Identify): 1.3
00:23:55.160 Maximum Queue Entries: 128
00:23:55.160 Contiguous Queues Required: Yes
00:23:55.160 Arbitration Mechanisms Supported
00:23:55.160 Weighted Round Robin: Not Supported
00:23:55.160 Vendor Specific: Not Supported
00:23:55.160 Reset Timeout: 15000 ms
00:23:55.160 Doorbell Stride: 4 bytes
00:23:55.160 NVM Subsystem Reset: Not Supported
00:23:55.160 Command Sets Supported
00:23:55.160 NVM Command Set: Supported
00:23:55.160 Boot Partition: Not Supported
00:23:55.160 Memory Page Size Minimum: 4096 bytes
00:23:55.160 Memory Page Size Maximum: 4096 bytes
00:23:55.160 Persistent Memory Region: Not Supported
00:23:55.160 Optional Asynchronous Events Supported
00:23:55.160 Namespace Attribute Notices: Supported
00:23:55.160 Firmware Activation Notices: Not Supported
00:23:55.160 ANA Change Notices: Not Supported
00:23:55.160 PLE Aggregate Log Change Notices: Not Supported
00:23:55.160 LBA Status Info Alert Notices: Not Supported
00:23:55.160 EGE Aggregate Log Change Notices: Not Supported
00:23:55.160 Normal NVM Subsystem Shutdown event: Not Supported
00:23:55.160 Zone Descriptor Change Notices: Not Supported
00:23:55.160 Discovery Log Change Notices: Not Supported
00:23:55.160 Controller Attributes
00:23:55.160 128-bit Host Identifier: Supported
00:23:55.160 Non-Operational Permissive Mode: Not Supported
00:23:55.160 NVM Sets: Not Supported
00:23:55.160 Read Recovery Levels: Not Supported
00:23:55.160 Endurance Groups: Not Supported
00:23:55.160 Predictable Latency Mode: Not Supported
00:23:55.160 Traffic Based Keep ALive: Not Supported
00:23:55.160 Namespace Granularity: Not Supported
00:23:55.160 SQ Associations: Not Supported
00:23:55.160 UUID List: Not Supported
00:23:55.160 Multi-Domain Subsystem: Not Supported
00:23:55.160 Fixed Capacity Management: Not Supported
00:23:55.160 Variable Capacity Management: Not Supported
00:23:55.160 Delete Endurance Group: Not Supported
00:23:55.160 Delete NVM Set: Not Supported
00:23:55.160 Extended LBA Formats Supported: Not Supported
00:23:55.160 Flexible Data Placement Supported: Not Supported
00:23:55.160
00:23:55.160 Controller Memory Buffer Support
00:23:55.160 ================================
00:23:55.160 Supported: No
00:23:55.160
00:23:55.160 Persistent Memory Region Support
00:23:55.160 ================================
00:23:55.160 Supported: No
00:23:55.160
00:23:55.160 Admin Command Set Attributes
00:23:55.160 ============================
00:23:55.160 Security Send/Receive: Not Supported
00:23:55.160 Format NVM: Not Supported
00:23:55.160 Firmware Activate/Download: Not Supported
00:23:55.160 Namespace Management: Not Supported
00:23:55.160 Device Self-Test: Not Supported
00:23:55.160 Directives: Not Supported
00:23:55.160 NVMe-MI: Not Supported
00:23:55.160 Virtualization Management: Not Supported
00:23:55.160 Doorbell Buffer Config: Not Supported
00:23:55.160 Get LBA Status Capability: Not Supported
00:23:55.160 Command & Feature Lockdown Capability: Not Supported
00:23:55.160 Abort Command Limit: 4
00:23:55.160 Async Event Request Limit: 4
00:23:55.160 Number of Firmware Slots: N/A
00:23:55.160 Firmware Slot 1 Read-Only: N/A
00:23:55.160 Firmware Activation Without Reset: N/A
00:23:55.160 Multiple Update Detection Support: N/A
00:23:55.160 Firmware Update Granularity: No Information Provided
00:23:55.160 Per-Namespace SMART Log: No
00:23:55.160 Asymmetric Namespace Access Log Page: Not Supported
00:23:55.160 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:55.160 Command Effects Log Page: Supported
00:23:55.160 Get Log Page Extended Data: Supported
00:23:55.160 Telemetry Log Pages: Not Supported
00:23:55.160 Persistent Event Log Pages: Not Supported
00:23:55.160 Supported Log Pages Log Page: May Support
00:23:55.160 Commands Supported & Effects Log Page: Not Supported
00:23:55.160 Feature Identifiers & Effects Log Page:May Support
00:23:55.160 NVMe-MI Commands & Effects Log Page: May Support
00:23:55.160 Data Area 4 for Telemetry Log: Not Supported
00:23:55.160 Error Log Page Entries Supported: 128
00:23:55.160 Keep Alive: Supported
00:23:55.160 Keep Alive Granularity: 10000 ms
00:23:55.160
00:23:55.160 NVM Command Set Attributes
00:23:55.160 ==========================
00:23:55.160 Submission Queue Entry Size
00:23:55.160 Max: 64
00:23:55.161 Min: 64
00:23:55.161 Completion Queue Entry Size
00:23:55.161 Max: 16
00:23:55.161 Min: 16
00:23:55.161 Number of Namespaces: 32
00:23:55.161 Compare Command: Supported
00:23:55.161 Write Uncorrectable Command: Not Supported
00:23:55.161 Dataset Management Command: Supported
00:23:55.161 Write Zeroes Command: Supported
00:23:55.161 Set Features Save Field: Not Supported
00:23:55.161 Reservations: Supported
00:23:55.161 Timestamp: Not Supported
00:23:55.161 Copy: Supported
00:23:55.161 Volatile Write Cache: Present
00:23:55.161 Atomic Write Unit (Normal): 1
00:23:55.161 Atomic Write Unit (PFail): 1
00:23:55.161 Atomic Compare & Write Unit: 1
00:23:55.161 Fused Compare & Write: Supported
00:23:55.161 Scatter-Gather List
00:23:55.161 SGL Command Set: Supported
00:23:55.161 SGL Keyed: Supported
00:23:55.161 SGL Bit Bucket Descriptor: Not Supported
00:23:55.161 SGL Metadata Pointer: Not Supported
00:23:55.161 Oversized SGL: Not Supported
00:23:55.161 SGL Metadata Address: Not Supported
00:23:55.161 SGL Offset: Supported
00:23:55.161 Transport SGL Data Block: Not Supported
00:23:55.161 Replay Protected Memory Block: Not Supported
00:23:55.161
00:23:55.161 Firmware Slot Information
00:23:55.161 =========================
00:23:55.161 Active slot: 1
00:23:55.161 Slot 1 Firmware Revision: 25.01
00:23:55.161
00:23:55.161
00:23:55.161 Commands Supported and Effects
00:23:55.161 ==============================
00:23:55.161 Admin Commands
00:23:55.161 --------------
00:23:55.161 Get Log Page (02h): Supported
00:23:55.161 Identify (06h): Supported
00:23:55.161 Abort (08h): Supported
00:23:55.161 Set Features (09h): Supported
00:23:55.161 Get Features (0Ah): Supported
00:23:55.161 Asynchronous Event Request (0Ch): Supported
00:23:55.161 Keep Alive (18h): Supported
00:23:55.161 I/O Commands
00:23:55.161 ------------
00:23:55.161 Flush (00h): Supported LBA-Change
00:23:55.161 Write (01h): Supported LBA-Change
00:23:55.161 Read (02h): Supported
00:23:55.161 Compare (05h): Supported
00:23:55.161 Write Zeroes (08h): Supported LBA-Change
00:23:55.161 Dataset Management (09h): Supported LBA-Change
00:23:55.161 Copy (19h): Supported LBA-Change
00:23:55.161
00:23:55.161 Error Log
00:23:55.161 =========
00:23:55.161
00:23:55.161 Arbitration
00:23:55.161 ===========
00:23:55.161 Arbitration Burst: 1
00:23:55.161
00:23:55.161 Power Management
00:23:55.161 ================
00:23:55.161 Number of Power States: 1
00:23:55.161 Current Power State: Power State #0
00:23:55.161 Power State #0:
00:23:55.161 Max Power: 0.00 W
00:23:55.161 Non-Operational State: Operational
00:23:55.161 Entry Latency: Not Reported
00:23:55.161 Exit Latency: Not Reported
00:23:55.161 Relative Read Throughput: 0
00:23:55.161 Relative Read Latency: 0
00:23:55.161 Relative Write Throughput: 0
00:23:55.161 Relative Write Latency: 0
00:23:55.161 Idle Power: Not Reported
00:23:55.161 Active Power: Not Reported
00:23:55.161 Non-Operational Permissive Mode: Not Supported
00:23:55.161
00:23:55.161 Health Information
00:23:55.161 ==================
00:23:55.161 Critical Warnings:
00:23:55.161 Available Spare Space: OK
00:23:55.161 Temperature: OK
00:23:55.161 Device Reliability: OK
00:23:55.161 Read Only: No
00:23:55.161 Volatile Memory Backup: OK
00:23:55.161 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:55.161 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:23:55.161 Available Spare: 0%
00:23:55.161 Available Spare Threshold: 0%
00:23:55.161 Life Percentage Used:[2024-10-11 11:57:39.707769] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.707775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc6e190)
00:23:55.161 [2024-10-11 11:57:39.707782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.161 [2024-10-11 11:57:39.707794] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdb340, cid 7, qid 0
00:23:55.161 [2024-10-11 11:57:39.708009] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.161 [2024-10-11 11:57:39.708016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.161 [2024-10-11 11:57:39.708019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708024] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdb340) on tqpair=0xc6e190
00:23:55.161 [2024-10-11 11:57:39.708059] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:23:55.161 [2024-10-11 11:57:39.708069] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcda8c0) on tqpair=0xc6e190
00:23:55.161 [2024-10-11 11:57:39.708076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.161 [2024-10-11 11:57:39.708082] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdaa40) on tqpair=0xc6e190
00:23:55.161 [2024-10-11 11:57:39.708087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.161 [2024-10-11 11:57:39.708094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdabc0) on tqpair=0xc6e190
00:23:55.161 [2024-10-11 11:57:39.708099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.161 [2024-10-11 11:57:39.708104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.161 [2024-10-11 11:57:39.708109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:55.161 [2024-10-11 11:57:39.708117] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708126] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.161 [2024-10-11 11:57:39.708133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.161 [2024-10-11 11:57:39.708145] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.161 [2024-10-11 11:57:39.708361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.161 [2024-10-11 11:57:39.708368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.161 [2024-10-11 11:57:39.708371] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708375] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.161 [2024-10-11 11:57:39.708382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708386] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708390] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.161 [2024-10-11 11:57:39.708397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.161 [2024-10-11 11:57:39.708411] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.161 [2024-10-11 11:57:39.708618] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.161 [2024-10-11 11:57:39.708624] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.161 [2024-10-11 11:57:39.708628] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708632] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.161 [2024-10-11 11:57:39.708636] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:23:55.161 [2024-10-11 11:57:39.708642] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:23:55.161 [2024-10-11 11:57:39.708652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708656] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708660] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.161 [2024-10-11 11:57:39.708674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.161 [2024-10-11 11:57:39.708685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.161 [2024-10-11 11:57:39.708895] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.161 [2024-10-11 11:57:39.708902] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.161 [2024-10-11 11:57:39.708906] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.161 [2024-10-11 11:57:39.708910] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.161 [2024-10-11 11:57:39.708920] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.708924] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.708930] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.162 [2024-10-11 11:57:39.708938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.162 [2024-10-11 11:57:39.708949] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.162 [2024-10-11 11:57:39.709163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.162 [2024-10-11 11:57:39.709170] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.162 [2024-10-11 11:57:39.709173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709177] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.162 [2024-10-11 11:57:39.709188] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709193] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.162 [2024-10-11 11:57:39.709203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.162 [2024-10-11 11:57:39.709213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.162 [2024-10-11 11:57:39.709400] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.162 [2024-10-11 11:57:39.709406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.162 [2024-10-11 11:57:39.709409] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709414] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.162 [2024-10-11 11:57:39.709424] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709428] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709432] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.162 [2024-10-11 11:57:39.709439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.162 [2024-10-11 11:57:39.709450] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.162 [2024-10-11 11:57:39.709649] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.162 [2024-10-11 11:57:39.709656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.162 [2024-10-11 11:57:39.709659] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709663] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.162 [2024-10-11 11:57:39.709683] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709687] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709691] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.162 [2024-10-11 11:57:39.709698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.162 [2024-10-11 11:57:39.709709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.162 [2024-10-11 11:57:39.709891] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.162 [2024-10-11 11:57:39.709898] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.162 [2024-10-11 11:57:39.709901] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709905] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.162 [2024-10-11 11:57:39.709915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709920] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.709923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.162 [2024-10-11 11:57:39.709934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.162 [2024-10-11 11:57:39.709945] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.162 [2024-10-11 11:57:39.710155] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.162 [2024-10-11 11:57:39.710161] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.162 [2024-10-11 11:57:39.710165] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710169] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.162 [2024-10-11 11:57:39.710179] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710183] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.162 [2024-10-11 11:57:39.710194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.162 [2024-10-11 11:57:39.710204] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.162 [2024-10-11 11:57:39.710376] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.162 [2024-10-11 11:57:39.710383] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.162 [2024-10-11 11:57:39.710387] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710391] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.162 [2024-10-11 11:57:39.710401] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710409] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.162 [2024-10-11 11:57:39.710416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.162 [2024-10-11 11:57:39.710426] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.162 [2024-10-11 11:57:39.710599] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.162 [2024-10-11 11:57:39.710606] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.162 [2024-10-11 11:57:39.710609] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710613] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.162 [2024-10-11 11:57:39.710623] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710627] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.710631] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc6e190)
00:23:55.162 [2024-10-11 11:57:39.710638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:55.162 [2024-10-11 11:57:39.710649] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcdad40, cid 3, qid 0
00:23:55.162 [2024-10-11 11:57:39.714678] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:55.162 [2024-10-11 11:57:39.714686] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:55.162 [2024-10-11 11:57:39.714690] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:55.162 [2024-10-11 11:57:39.714694] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcdad40) on tqpair=0xc6e190
00:23:55.162 [2024-10-11 11:57:39.714702] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:23:55.162 0%
00:23:55.162 Data Units Read: 0
00:23:55.162 Data Units Written: 0
00:23:55.162 Host Read Commands: 0
00:23:55.162 Host Write Commands: 0
00:23:55.162 Controller Busy Time: 0 minutes
00:23:55.162 Power Cycles: 0
00:23:55.162 Power On Hours: 0 hours
00:23:55.162 Unsafe Shutdowns: 0
00:23:55.162 Unrecoverable Media Errors: 0
00:23:55.162 Lifetime Error Log Entries: 0
00:23:55.162 Warning Temperature Time: 0 minutes
00:23:55.162 Critical Temperature Time: 0 minutes
00:23:55.162
00:23:55.162 Number of Queues
00:23:55.162 ================
00:23:55.162 Number of I/O Submission Queues: 127
00:23:55.162 Number of I/O Completion Queues: 127
00:23:55.162
00:23:55.162 Active Namespaces
00:23:55.162 =================
00:23:55.162 Namespace ID:1
00:23:55.162 Error Recovery Timeout: Unlimited
00:23:55.162 Command Set Identifier: NVM (00h)
00:23:55.162 Deallocate: Supported
00:23:55.162 Deallocated/Unwritten Error: Not Supported
00:23:55.162 Deallocated Read Value: Unknown
00:23:55.162 Deallocate in Write Zeroes: Not Supported
00:23:55.162 Deallocated Guard Field: 0xFFFF
00:23:55.162 Flush: Supported
00:23:55.162 Reservation: Supported
00:23:55.162 Namespace Sharing Capabilities: Multiple Controllers
00:23:55.162 Size (in LBAs): 131072 (0GiB)
00:23:55.162 Capacity (in LBAs): 131072 (0GiB)
00:23:55.162 Utilization (in LBAs): 131072 (0GiB)
00:23:55.162 NGUID: ABCDEF0123456789ABCDEF0123456789
00:23:55.162 EUI64: ABCDEF0123456789
00:23:55.162 UUID: d4907451-b09e-45a4-8c07-dc76b6f85d1b
00:23:55.162 Thin Provisioning: Not Supported
00:23:55.162 Per-NS Atomic Units: Yes
00:23:55.162 Atomic Boundary Size (Normal): 0
00:23:55.162 Atomic Boundary Size (PFail): 0
00:23:55.162 Atomic Boundary Offset: 0
00:23:55.162 Maximum Single Source Range Length: 65535
00:23:55.162 Maximum Copy Length: 65535
00:23:55.162 Maximum Source Range Count: 1
00:23:55.162 NGUID/EUI64 Never Reused: No
00:23:55.162 Namespace Write Protected: No
00:23:55.162 Number of LBA Formats: 1
00:23:55.162 Current LBA Format: LBA Format #00
00:23:55.162 LBA Format #00: Data Size: 512 Metadata Size: 0
00:23:55.162
00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
host/identify.sh@51 -- # sync 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.162 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.162 rmmod nvme_tcp 00:23:55.162 rmmod nvme_fabrics 00:23:55.423 rmmod nvme_keyring 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1109210 ']' 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1109210 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1109210 ']' 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1109210 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1109210 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1109210' 00:23:55.423 killing process with pid 1109210 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1109210 00:23:55.423 11:57:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1109210 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:55.683 11:57:40 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.683 11:57:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.593 11:57:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:57.593 00:23:57.593 real 0m11.769s 00:23:57.593 user 0m8.914s 00:23:57.593 sys 0m6.225s 00:23:57.593 11:57:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:57.593 11:57:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:57.593 ************************************ 00:23:57.593 END TEST nvmf_identify 00:23:57.593 ************************************ 00:23:57.593 11:57:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:57.593 11:57:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:57.593 11:57:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:57.593 11:57:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.854 ************************************ 00:23:57.854 START TEST nvmf_perf 00:23:57.854 ************************************ 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:57.854 * Looking for test storage... 
00:23:57.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.854 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:57.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.855 --rc genhtml_branch_coverage=1 00:23:57.855 --rc genhtml_function_coverage=1 00:23:57.855 --rc genhtml_legend=1 00:23:57.855 --rc geninfo_all_blocks=1 00:23:57.855 --rc geninfo_unexecuted_blocks=1 00:23:57.855 00:23:57.855 ' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:57.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.855 --rc genhtml_branch_coverage=1 00:23:57.855 --rc genhtml_function_coverage=1 00:23:57.855 --rc genhtml_legend=1 00:23:57.855 --rc geninfo_all_blocks=1 00:23:57.855 --rc geninfo_unexecuted_blocks=1 00:23:57.855 00:23:57.855 ' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:57.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.855 --rc genhtml_branch_coverage=1 00:23:57.855 --rc genhtml_function_coverage=1 00:23:57.855 --rc genhtml_legend=1 00:23:57.855 --rc geninfo_all_blocks=1 00:23:57.855 --rc geninfo_unexecuted_blocks=1 00:23:57.855 00:23:57.855 ' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:57.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.855 --rc genhtml_branch_coverage=1 00:23:57.855 --rc genhtml_function_coverage=1 00:23:57.855 --rc genhtml_legend=1 00:23:57.855 --rc geninfo_all_blocks=1 00:23:57.855 --rc geninfo_unexecuted_blocks=1 00:23:57.855 00:23:57.855 ' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.855 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.116 11:57:42 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.116 11:57:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.250 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:06.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:06.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:06.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:06.251 11:57:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:06.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.251 11:57:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:06.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:24:06.251 00:24:06.251 --- 10.0.0.2 ping statistics --- 00:24:06.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.251 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:24:06.251 00:24:06.251 --- 10.0.0.1 ping statistics --- 00:24:06.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.251 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:06.251 11:57:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1113864 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1113864 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1113864 ']' 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:06.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.251 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.251 [2024-10-11 11:57:50.077037] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:24:06.251 [2024-10-11 11:57:50.077106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.251 [2024-10-11 11:57:50.166917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.251 [2024-10-11 11:57:50.221428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.251 [2024-10-11 11:57:50.221482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.251 [2024-10-11 11:57:50.221491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.251 [2024-10-11 11:57:50.221498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.251 [2024-10-11 11:57:50.221504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.251 [2024-10-11 11:57:50.223732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.251 [2024-10-11 11:57:50.223976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.251 [2024-10-11 11:57:50.223977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.252 [2024-10-11 11:57:50.223819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.512 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:06.512 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:06.512 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:06.512 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:06.512 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.512 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.512 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:06.512 11:57:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:07.083 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:07.083 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:07.083 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:07.083 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.343 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
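For readability, the configuration this stretch of the trace performs — the bdev staging just above and the subsystem plumbing that follows below — condenses to roughly the following sketch. This is a paraphrase of the rpc.py calls visible in the trace, not captured output: paths are shortened to scripts/rpc.py, and it assumes an nvmf_tgt already running with the generated config (in the run above it lives inside the cvl_0_0_ns_spdk namespace and is reached via the full workspace path).

  # find the PCI address of the local NVMe controller (Nvme0) in the loaded bdev config
  traddr=$(scripts/rpc.py framework_get_config bdev \
             | jq -r '.[].params | select(.name=="Nvme0").traddr')     # -> 0000:65:00.0
  # stage a 64 MiB RAM bdev with 512-byte blocks next to it
  scripts/rpc.py bdev_malloc_create 64 512                             # -> Malloc0

  # export both bdevs over NVMe/TCP on 10.0.0.2:4420, plus a discovery listener
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420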
00:24:07.343 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:07.343 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:07.343 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:07.343 11:57:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:07.604 [2024-10-11 11:57:52.066730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.604 11:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:07.864 11:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:07.864 11:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.124 11:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:08.124 11:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:08.124 11:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.385 [2024-10-11 11:57:52.870472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.385 11:57:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:08.645 11:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:08.645 11:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:08.645 11:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:08.645 11:57:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:10.027 Initializing NVMe Controllers 00:24:10.027 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:10.027 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:10.027 Initialization complete. Launching workers. 
00:24:10.027 ======================================================== 00:24:10.027 Latency(us) 00:24:10.027 Device Information : IOPS MiB/s Average min max 00:24:10.027 PCIE (0000:65:00.0) NSID 1 from core 0: 78088.72 305.03 409.14 17.61 4957.29 00:24:10.027 ======================================================== 00:24:10.027 Total : 78088.72 305.03 409.14 17.61 4957.29 00:24:10.027 00:24:10.027 11:57:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.412 Initializing NVMe Controllers 00:24:11.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.412 Initialization complete. Launching workers. 00:24:11.412 ======================================================== 00:24:11.412 Latency(us) 00:24:11.412 Device Information : IOPS MiB/s Average min max 00:24:11.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 117.00 0.46 8801.40 240.12 45655.76 00:24:11.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15221.14 5985.90 49881.57 00:24:11.412 ======================================================== 00:24:11.412 Total : 183.00 0.71 11116.71 240.12 49881.57 00:24:11.412 00:24:11.412 11:57:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:12.797 Initializing NVMe Controllers 00:24:12.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:12.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:12.797 Initialization complete. Launching workers. 00:24:12.797 ======================================================== 00:24:12.797 Latency(us) 00:24:12.797 Device Information : IOPS MiB/s Average min max 00:24:12.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11646.00 45.49 2749.80 274.52 7603.56 00:24:12.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3811.00 14.89 8481.44 4490.61 22419.13 00:24:12.797 ======================================================== 00:24:12.797 Total : 15457.00 60.38 4162.96 274.52 22419.13 00:24:12.797 00:24:12.797 11:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:12.797 11:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:12.797 11:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.418 Initializing NVMe Controllers 00:24:15.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.418 Controller IO queue size 128, less than required. 00:24:15.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:15.418 Controller IO queue size 128, less than required. 00:24:15.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:15.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:15.418 Initialization complete. Launching workers. 00:24:15.418 ======================================================== 00:24:15.418 Latency(us) 00:24:15.418 Device Information : IOPS MiB/s Average min max 00:24:15.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1836.18 459.05 71099.76 39358.42 125609.89 00:24:15.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 625.39 156.35 218919.12 63347.48 338095.56 00:24:15.419 ======================================================== 00:24:15.419 Total : 2461.57 615.39 108655.00 39358.42 338095.56 00:24:15.419 00:24:15.419 11:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:15.419 No valid NVMe controllers or AIO or URING devices found 00:24:15.419 Initializing NVMe Controllers 00:24:15.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.419 Controller IO queue size 128, less than required. 00:24:15.419 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.419 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:15.419 Controller IO queue size 128, less than required. 00:24:15.419 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.419 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:15.419 WARNING: Some requested NVMe devices were skipped 00:24:15.419 11:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:17.496 Initializing NVMe Controllers 00:24:17.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:17.496 Controller IO queue size 128, less than required. 00:24:17.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.496 Controller IO queue size 128, less than required. 00:24:17.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:17.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:17.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:17.496 Initialization complete. Launching workers. 
00:24:17.496 00:24:17.496 ==================== 00:24:17.496 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:17.496 TCP transport: 00:24:17.496 polls: 40268 00:24:17.496 idle_polls: 25253 00:24:17.496 sock_completions: 15015 00:24:17.496 nvme_completions: 7333 00:24:17.496 submitted_requests: 11028 00:24:17.496 queued_requests: 1 00:24:17.496 00:24:17.496 ==================== 00:24:17.496 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:17.496 TCP transport: 00:24:17.496 polls: 40116 00:24:17.496 idle_polls: 26156 00:24:17.496 sock_completions: 13960 00:24:17.496 nvme_completions: 7205 00:24:17.496 submitted_requests: 10852 00:24:17.496 queued_requests: 1 00:24:17.496 ======================================================== 00:24:17.496 Latency(us) 00:24:17.496 Device Information : IOPS MiB/s Average min max 00:24:17.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1831.48 457.87 71861.53 31369.30 135495.50 00:24:17.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1799.51 449.88 71991.02 33838.85 127381.32 00:24:17.496 ======================================================== 00:24:17.496 Total : 3630.99 907.75 71925.71 31369.30 135495.50 00:24:17.496 00:24:17.496 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:17.496 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.756 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.757 rmmod nvme_tcp 00:24:17.757 rmmod nvme_fabrics 00:24:17.757 rmmod nvme_keyring 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1113864 ']' 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1113864 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1113864 ']' 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1113864 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1113864 00:24:17.757 11:58:02 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1113864' 00:24:17.757 killing process with pid 1113864 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1113864 00:24:17.757 11:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1113864 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.300 11:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:22.215 00:24:22.215 real 0m24.157s 00:24:22.215 user 0m58.131s 00:24:22.215 sys 0m8.475s 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:22.215 ************************************ 00:24:22.215 END TEST nvmf_perf 00:24:22.215 ************************************ 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.215 ************************************ 00:24:22.215 START TEST nvmf_fio_host 00:24:22.215 ************************************ 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:22.215 * Looking for test storage... 
00:24:22.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:22.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.215 --rc genhtml_branch_coverage=1 00:24:22.215 --rc genhtml_function_coverage=1 00:24:22.215 --rc genhtml_legend=1 00:24:22.215 --rc geninfo_all_blocks=1 00:24:22.215 --rc geninfo_unexecuted_blocks=1 00:24:22.215 00:24:22.215 ' 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:22.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.215 --rc genhtml_branch_coverage=1 00:24:22.215 --rc genhtml_function_coverage=1 00:24:22.215 --rc genhtml_legend=1 00:24:22.215 --rc geninfo_all_blocks=1 00:24:22.215 --rc geninfo_unexecuted_blocks=1 00:24:22.215 00:24:22.215 ' 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:22.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.215 --rc genhtml_branch_coverage=1 00:24:22.215 --rc genhtml_function_coverage=1 00:24:22.215 --rc genhtml_legend=1 00:24:22.215 --rc geninfo_all_blocks=1 00:24:22.215 --rc geninfo_unexecuted_blocks=1 00:24:22.215 00:24:22.215 ' 00:24:22.215 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:22.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.216 --rc genhtml_branch_coverage=1 00:24:22.216 --rc genhtml_function_coverage=1 00:24:22.216 --rc genhtml_legend=1 00:24:22.216 --rc geninfo_all_blocks=1 00:24:22.216 --rc geninfo_unexecuted_blocks=1 00:24:22.216 00:24:22.216 ' 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.216 11:58:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.216 
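
The "[: : integer expression expected" message above is common.sh feeding an empty string to a numeric test ('[' '' -eq 1 ']'); the test simply fails and the run continues, so it is noise rather than a failure. A hedged illustration of the mechanism and the usual defensive spelling (the variable name is a stand-in, not the one common.sh uses):

# '[' with an empty operand is not a valid integer comparison:
flag=""
[ "$flag" -eq 1 ] && echo on        # stderr: "[: : integer expression expected"
# Defaulting the expansion keeps the test quiet and simply false:
[ "${flag:-0}" -eq 1 ] && echo on   # no message, no output
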
11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.216 11:58:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.357 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:30.358 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:30.358 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:30.358 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:30.358 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.358 11:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:30.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:24:30.358 00:24:30.358 --- 10.0.0.2 ping statistics --- 00:24:30.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.358 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:30.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:24:30.358 00:24:30.358 --- 10.0.0.1 ping statistics --- 00:24:30.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.358 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.358 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1120834 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1120834 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1120834 ']' 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.359 11:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.359 [2024-10-11 11:58:14.275329] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
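
To summarize the plumbing traced above before the target starts: one port of the dual-port E810 NIC is moved into a private network namespace, both ends are addressed on 10.0.0.0/24, the NVMe/TCP port is opened, reachability is proven in both directions, and nvmf_tgt is launched inside the namespace on a 4-core mask. Condensed from the trace (same commands, xtrace noise removed, repository path shortened to ./):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator (host) address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target: 0.690 ms
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> host: 0.324 ms
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
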
00:24:30.359 [2024-10-11 11:58:14.275395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.359 [2024-10-11 11:58:14.365643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.359 [2024-10-11 11:58:14.419959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.359 [2024-10-11 11:58:14.420016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.359 [2024-10-11 11:58:14.420024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.359 [2024-10-11 11:58:14.420031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.359 [2024-10-11 11:58:14.420038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.359 [2024-10-11 11:58:14.422421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.359 [2024-10-11 11:58:14.422584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.359 [2024-10-11 11:58:14.422740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.359 [2024-10-11 11:58:14.422766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.620 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.620 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:30.620 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:30.881 [2024-10-11 11:58:15.267387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.881 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:30.881 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.882 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.882 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:31.143 Malloc1 00:24:31.143 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:31.405 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:31.405 11:58:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.667 [2024-10-11 11:58:16.145643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.667 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
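
The target is then provisioned end to end with six RPCs, all visible in the trace above. Condensed (arguments exactly as issued; the long rpc.py path is shortened):

rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -o and -u are the harness defaults
rpc.py bdev_malloc_create 64 512 -b Malloc1       # 64 MB RAM-backed bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
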
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:31.928 11:58:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.190 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:32.190 fio-3.35 00:24:32.190 Starting 1 thread 00:24:34.733 00:24:34.733 test: (groupid=0, jobs=1): 
err= 0: pid=1121477: Fri Oct 11 11:58:19 2024 00:24:34.733 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2005msec) 00:24:34.733 slat (usec): min=2, max=278, avg= 2.16, stdev= 2.39 00:24:34.733 clat (usec): min=3337, max=9701, avg=5106.08, stdev=392.73 00:24:34.733 lat (usec): min=3339, max=9714, avg=5108.24, stdev=392.96 00:24:34.733 clat percentiles (usec): 00:24:34.733 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:34.733 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:34.733 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5604], 00:24:34.733 | 99.00th=[ 5932], 99.50th=[ 6325], 99.90th=[ 8717], 99.95th=[ 8979], 00:24:34.733 | 99.99th=[ 9634] 00:24:34.733 bw ( KiB/s): min=54096, max=55824, per=100.00%, avg=55238.00, stdev=775.61, samples=4 00:24:34.733 iops : min=13524, max=13956, avg=13809.50, stdev=193.90, samples=4 00:24:34.733 write: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2005msec); 0 zone resets 00:24:34.733 slat (usec): min=2, max=274, avg= 2.23, stdev= 1.83 00:24:34.733 clat (usec): min=2342, max=8324, avg=4127.68, stdev=348.98 00:24:34.733 lat (usec): min=2344, max=8326, avg=4129.91, stdev=349.29 00:24:34.733 clat percentiles (usec): 00:24:34.733 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:24:34.733 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:34.733 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:24:34.733 | 99.00th=[ 4883], 99.50th=[ 5735], 99.90th=[ 7701], 99.95th=[ 7898], 00:24:34.733 | 99.99th=[ 8291] 00:24:34.733 bw ( KiB/s): min=54384, max=55616, per=100.00%, avg=55190.00, stdev=555.77, samples=4 00:24:34.733 iops : min=13596, max=13904, avg=13797.50, stdev=138.94, samples=4 00:24:34.733 lat (msec) : 4=16.59%, 10=83.41% 00:24:34.733 cpu : usr=77.94%, sys=20.66%, ctx=45, majf=0, minf=8 00:24:34.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:34.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:34.733 issued rwts: total=27678,27663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:34.733 00:24:34.733 Run status group 0 (all jobs): 00:24:34.733 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2005-2005msec 00:24:34.733 WRITE: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2005-2005msec 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:34.733 
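
The fio jobs in this test exercise the exported namespace through SPDK's external fio plugin rather than a kernel block device: the engine is LD_PRELOADed and the NVMe-oF endpoint is encoded in --filename. Stripped to its essentials ($SPDK abbreviates the long workspace path from the trace):

LD_PRELOAD=$SPDK/build/fio/spdk_nvme \
  /usr/src/fio/fio $SPDK/app/fio/nvme/example_config.fio \
  --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
  --bs=4096

The first (4 KiB randrw) job above lands at roughly 13.8k IOPS and 53.9 MiB/s in each direction against the Malloc-backed namespace; the second run repeats the exercise with the mock_sgl_config.fio job at 16 KiB blocks.
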
11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:34.733 11:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:34.991 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:34.991 fio-3.35 00:24:34.991 Starting 1 thread 00:24:37.530 00:24:37.530 test: (groupid=0, jobs=1): err= 0: pid=1122294: Fri Oct 11 11:58:22 2024 00:24:37.530 read: IOPS=9522, BW=149MiB/s (156MB/s)(298MiB/2006msec) 00:24:37.530 slat (usec): min=3, max=110, avg= 3.64, stdev= 1.63 00:24:37.530 clat (usec): min=1712, max=14349, avg=8158.80, stdev=1850.69 00:24:37.530 lat (usec): min=1716, max=14353, avg=8162.44, stdev=1850.83 00:24:37.530 clat percentiles (usec): 00:24:37.530 | 1.00th=[ 4178], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6521], 00:24:37.530 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8586], 00:24:37.530 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11207], 00:24:37.530 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13698], 99.95th=[14091], 00:24:37.530 | 99.99th=[14222] 00:24:37.530 bw ( KiB/s): min=71968, max=80352, per=49.76%, avg=75808.00, stdev=3641.82, samples=4 00:24:37.530 iops : min= 4498, max= 5022, avg=4738.00, stdev=227.61, samples=4 00:24:37.530 write: IOPS=5453, BW=85.2MiB/s (89.4MB/s)(155MiB/1813msec); 0 zone resets 00:24:37.530 slat (usec): min=39, max=460, 
avg=41.06, stdev= 8.82 00:24:37.530 clat (usec): min=1847, max=16158, avg=9150.67, stdev=1396.11 00:24:37.530 lat (usec): min=1887, max=16290, avg=9191.73, stdev=1398.48 00:24:37.530 clat percentiles (usec): 00:24:37.530 | 1.00th=[ 6259], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7963], 00:24:37.530 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:24:37.530 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11469], 00:24:37.530 | 99.00th=[12387], 99.50th=[12911], 99.90th=[15795], 99.95th=[15926], 00:24:37.530 | 99.99th=[16188] 00:24:37.530 bw ( KiB/s): min=74944, max=83680, per=90.48%, avg=78952.00, stdev=3680.82, samples=4 00:24:37.530 iops : min= 4684, max= 5230, avg=4934.50, stdev=230.05, samples=4 00:24:37.530 lat (msec) : 2=0.04%, 4=0.50%, 10=79.69%, 20=19.77% 00:24:37.530 cpu : usr=84.34%, sys=14.31%, ctx=12, majf=0, minf=22 00:24:37.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:37.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:37.530 issued rwts: total=19102,9888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:37.530 00:24:37.530 Run status group 0 (all jobs): 00:24:37.530 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=298MiB (313MB), run=2006-2006msec 00:24:37.530 WRITE: bw=85.2MiB/s (89.4MB/s), 85.2MiB/s-85.2MiB/s (89.4MB/s-89.4MB/s), io=155MiB (162MB), run=1813-1813msec 00:24:37.530 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.790 rmmod nvme_tcp 00:24:37.790 rmmod nvme_fabrics 00:24:37.790 rmmod nvme_keyring 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1120834 ']' 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1120834 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1120834 ']' 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 
1120834 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.790 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1120834 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1120834' 00:24:38.051 killing process with pid 1120834 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1120834 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1120834 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.051 11:58:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.595 00:24:40.595 real 0m18.142s 00:24:40.595 user 1m4.120s 00:24:40.595 sys 0m7.717s 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.595 ************************************ 00:24:40.595 END TEST nvmf_fio_host 00:24:40.595 ************************************ 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.595 ************************************ 00:24:40.595 START TEST nvmf_failover 00:24:40.595 ************************************ 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:40.595 * Looking for test storage... 00:24:40.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.595 --rc genhtml_branch_coverage=1 00:24:40.595 --rc genhtml_function_coverage=1 00:24:40.595 --rc genhtml_legend=1 00:24:40.595 --rc geninfo_all_blocks=1 00:24:40.595 --rc geninfo_unexecuted_blocks=1 00:24:40.595 00:24:40.595 ' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.595 --rc genhtml_branch_coverage=1 00:24:40.595 --rc genhtml_function_coverage=1 00:24:40.595 --rc genhtml_legend=1 00:24:40.595 --rc geninfo_all_blocks=1 00:24:40.595 --rc geninfo_unexecuted_blocks=1 00:24:40.595 00:24:40.595 ' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.595 --rc genhtml_branch_coverage=1 00:24:40.595 --rc genhtml_function_coverage=1 00:24:40.595 --rc genhtml_legend=1 00:24:40.595 --rc geninfo_all_blocks=1 00:24:40.595 --rc geninfo_unexecuted_blocks=1 00:24:40.595 00:24:40.595 ' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:40.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.595 --rc genhtml_branch_coverage=1 00:24:40.595 --rc genhtml_function_coverage=1 00:24:40.595 --rc genhtml_legend=1 00:24:40.595 --rc geninfo_all_blocks=1 00:24:40.595 --rc geninfo_unexecuted_blocks=1 00:24:40.595 00:24:40.595 ' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.595 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.596 11:58:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:48.738 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:48.739 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:48.739 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:48.739 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:48.739 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:24:48.739 00:24:48.739 --- 10.0.0.2 ping statistics --- 00:24:48.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.739 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:24:48.739 00:24:48.739 --- 10.0.0.1 ping statistics --- 00:24:48.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.739 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1126962 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1126962 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1126962 ']' 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:48.739 11:58:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.739 [2024-10-11 11:58:32.489155] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:24:48.739 [2024-10-11 11:58:32.489220] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.740 [2024-10-11 11:58:32.577603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:48.740 [2024-10-11 11:58:32.629366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
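The nvmf_tgt invocation recorded at nvmf/common.sh@506 above is worth reading on its own: the target runs inside the cvl_0_0_ns_spdk namespace created earlier, -i 0 selects the shared-memory id that 'spdk_trace -s nvmf -i 0' later reuses, -e 0xFFFF enables every tracepoint group (hence the "Tracepoint Group Mask 0xFFFF" notice), and -m 0xE is a core mask with bits 1-3 set, which matches the reactor notices that follow on cores 1, 2 and 3. Restated from the trace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE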
00:24:48.740 [2024-10-11 11:58:32.629414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.740 [2024-10-11 11:58:32.629423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.740 [2024-10-11 11:58:32.629430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.740 [2024-10-11 11:58:32.629436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.740 [2024-10-11 11:58:32.631335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.740 [2024-10-11 11:58:32.631498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.740 [2024-10-11 11:58:32.631499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.740 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:48.740 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:48.740 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:48.740 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:48.740 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.740 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.740 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:49.001 [2024-10-11 11:58:33.527946] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.001 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:49.263 Malloc0 00:24:49.263 11:58:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:49.525 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:49.786 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.786 [2024-10-11 11:58:34.411948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.047 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:50.047 [2024-10-11 11:58:34.608604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:50.047 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:50.308 [2024-10-11 11:58:34.805323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1127328 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1127328 /var/tmp/bdevperf.sock 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1127328 ']' 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.309 11:58:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:51.252 11:58:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.252 11:58:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:51.252 11:58:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:51.512 NVMe0n1 00:24:51.512 11:58:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:51.773 00:24:51.773 11:58:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:51.773 11:58:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1127665 00:24:51.773 11:58:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:52.714 11:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.975 [2024-10-11 11:58:37.481564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f450 is same with the state(6) to be set 00:24:52.975 [2024-10-11 11:58:37.481608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f450 is same with the state(6) to be set 00:24:52.975 [2024-10-11 11:58:37.481614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f450 is same with the state(6) to be set 00:24:52.975 
[2024-10-11 11:58:37.481619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f450 is same with the state(6) to be set 00:24:52.975 [2024-10-11 11:58:37.481624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190f450 is same with the state(6) to be set 00:24:52.975 11:58:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:56.271 11:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:56.271 00:24:56.532 11:58:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:56.532 [2024-10-11 11:58:41.067089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910200 is same with the state(6) to be set 00:24:56.532 [the same tcp.c:1773 message for tqpair=0x1910200 repeats verbatim through 11:58:41.067697; duplicates elided] [2024-10-11 11:58:41.067701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1910200 is same with the state(6) to be set 00:24:56.533
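The failover exercise itself is the sequence of rpc.py calls traced between host/failover.sh@43 and @48: bdevperf holds paths to 10.0.0.2:4420 and 10.0.0.2:4421 (both attached earlier with -x failover), the test removes the 4420 listener to push I/O onto 4421, attaches 4422 as a spare path, then removes 4421 to force a second failover; the repeated tcp.c:1773 messages appear while the target tears down the qpairs on each dropped listener. A condensed restatement of those calls, with $rpc_py abbreviating the full script path used in the trace:
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # @43: drop the active path
sleep 3                                                                    # @45: give bdevperf time to fail over to 4421
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn" -x failover      # @47: add 4422 as the next path
$rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421   # @48: force a second failover
The same rotation continues immediately below: @53 re-adds the 4420 listener and @57 removes 4422.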
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:59.829 11:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.829 [2024-10-11 11:58:44.258748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.829 11:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:00.769 11:58:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:01.031 [2024-10-11 11:58:45.451121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 
00:25:01.031 [2024-10-11 11:58:45.451230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set 00:25:01.031 [2024-10-11 11:58:45.451331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1911150 is same with the state(6) to be set
00:25:01.031 [2024-10-11 11:58:45.451336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1911150 is same with the state(6) to be set
[... the same tcp.c:1773 diagnostic repeats verbatim some 90 more times between 11:58:45.451340 and 11:58:45.451737, only the microsecond timestamp changing; duplicates omitted ...]
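The flood above is SPDK's nvmf_tcp_qpair_set_recv_state guard (tcp.c:1773) firing over and over: judging from the message text alone, the target keeps requesting a recv state the qpair is already in, so each call is a no-op that only prints the diagnostic. When triaging a capture like this, a dedupe count is quicker than scrolling; a minimal sketch, assuming the console output was saved to a hypothetical file log.txt:

    # Count how often each (source line, tqpair) diagnostic fired,
    # most frequent first.
    grep -o 'tcp\.c:1773:nvmf_tcp_qpair_set_recv_state: \*ERROR\*: The recv state of tqpair=0x[0-9a-f]*' log.txt \
      | sort | uniq -c | sort -rn | head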
00:25:01.032 11:58:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1127665
00:25:07.632 {
00:25:07.632 "results": [
00:25:07.632 {
00:25:07.632 "job": "NVMe0n1",
00:25:07.632 "core_mask": "0x1",
00:25:07.632 "workload": "verify",
00:25:07.632 "status": "finished",
00:25:07.632 "verify_range": {
00:25:07.632 "start": 0,
00:25:07.632 "length": 16384
00:25:07.632 },
00:25:07.632 "queue_depth": 128,
00:25:07.632 "io_size": 4096,
00:25:07.632 "runtime": 15.003693,
00:25:07.632 "iops": 12392.548954447415,
00:25:07.632 "mibps": 48.408394353310214,
00:25:07.632 "io_failed": 9181,
00:25:07.632 "io_timeout": 0,
00:25:07.632 "avg_latency_us": 9821.589422443174,
00:25:07.632 "min_latency_us": 546.1333333333333,
00:25:07.632 "max_latency_us": 32768.0
00:25:07.632 }
00:25:07.632 ],
00:25:07.632 "core_count": 1
00:25:07.632 }
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1127328
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1127328 ']'
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1127328
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1127328
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1127328'
killing process with pid 1127328
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1127328
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1127328
00:25:07.632 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-10-11 11:58:34.886074] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
[2024-10-11 11:58:34.886150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127328 ]
[2024-10-11 11:58:34.970920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-11 11:58:35.018583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
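The JSON block above is bdevperf's summary for the NVMe0n1 verify job (12392.5 IOPS over 15 s, 9181 failed I/Os). When a script needs the headline numbers rather than the raw object, jq can pull them out; a minimal sketch, assuming the object was saved (timestamp prefixes stripped) to a hypothetical results.json:

    # Print the headline numbers from the bdevperf summary above.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed I/Os, avg latency \(.avg_latency_us) us"' results.json

The large io_failed count is consistent with the ABORTED - SQ DELETION completions dumped below: the failover test tears the target down mid-run, so in-flight commands are aborted rather than completed.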
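The autotest_common.sh trace above (@950 through @974) shows killprocess validating its pid argument, probing the process with kill -0, resolving the process name (reactor_0 here, so the sudo special case at @960 is skipped), then killing and reaping it. A hedged bash reconstruction, inferred only from that xtrace output (the real helper in the SPDK tree may differ in details):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                     # @950: require a pid
        if kill -0 "$pid" 2>/dev/null; then           # @954: process alive?
            local process_name=
            if [ "$(uname)" = Linux ]; then           # @955
                process_name=$(ps --no-headers -o comm= "$pid")   # @956
            fi
            if [ "$process_name" = sudo ]; then       # @960: branch not taken
                :                                     # here; body not visible
            fi
            echo "killing process with pid $pid"      # @968
            kill "$pid"                               # @969
            wait "$pid" || true                       # @974: reap the child
        fi
    }

In the run above it resolves process_name=reactor_0 (the SPDK reactor thread), so pid 1127328 is killed and reaped directly.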
00:25:07.632 10925.00 IOPS, 42.68 MiB/s [2024-10-11T09:58:52.264Z]
[2024-10-11 11:58:37.484211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-11 11:58:37.484245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for READ lba 94616-94664 and WRITE lba 94672-95168 (len:8, stepping by 8, cid varying), every command completed as ABORTED - SQ DELETION (00/08); roughly 70 near-identical record pairs between 11:58:37.484262 and 11:58:37.485465 omitted ...]
[2024-10-11 11:58:37.485487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-10-11 11:58:37.485495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95176 len:8 PRP1 0x0 PRP2 0x0
[2024-10-11 11:58:37.485503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-11 11:58:37.485539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-10-11 11:58:37.485550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-11 11:58:37.485558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-10-11 11:58:37.485565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-11 11:58:37.485576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-10-11 11:58:37.485584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-11 11:58:37.485592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-10-11 11:58:37.485599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-11 11:58:37.485607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248a270 is same with the state(6) to be set
[2024-10-11 11:58:37.485775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-10-11 11:58:37.485782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-10-11 11:58:37.485789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95184 len:8 PRP1 0x0 PRP2 0x0
[2024-10-11 11:58:37.485797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
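From here the dump settles into one fixed triplet per queued command, abort_queued_reqs, a manual completion, and an ABORTED - SQ DELETION status, where (00/08) reads as status code type 0x0 (generic) and status code 0x08 (command aborted due to SQ deletion); the run is summarized below rather than reproduced. To recover the affected LBA range from a capture, a minimal sketch assuming the hypothetical file log.txt again:

    # Report how many records mention an LBA and the range they span.
    grep -o 'lba:[0-9]*' log.txt | cut -d: -f2 | sort -n |
      awk 'NR==1 {min=$1} {max=$1; n++} END {printf "%d records, lba %d-%d\n", n, min, max}'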
[... the abort pattern above (nvme_qpair_abort_queued_reqs -> Command completed manually -> ABORTED - SQ DELETION (00/08)) repeats for WRITE lba 95192 through 95584 (len:8, stepping by 8); roughly 50 near-identical record triplets between 11:58:37.485806 and 11:58:37.497896 omitted ...]
[2024-10-11 11:58:37.497904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting
queued i/o 00:25:07.636 [2024-10-11 11:58:37.497909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.497915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95592 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.497922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.497930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.497938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.497944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95600 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.497951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.497959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.497964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.497970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95608 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.497977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.497985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.497990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.497996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95616 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95624 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94608 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498069] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94616 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94624 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94632 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94640 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94656 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94664 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94672 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94680 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94688 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94696 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94704 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 
[2024-10-11 11:58:37.498397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94712 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94720 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94728 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.637 [2024-10-11 11:58:37.498475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94736 len:8 PRP1 0x0 PRP2 0x0 00:25:07.637 [2024-10-11 11:58:37.498482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.637 [2024-10-11 11:58:37.498490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.637 [2024-10-11 11:58:37.498495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94744 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94752 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94760 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94768 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94776 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94784 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94792 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94800 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:94808 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94816 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94824 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94832 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94840 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94848 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94856 len:8 PRP1 0x0 PRP2 0x0 
00:25:07.638 [2024-10-11 11:58:37.498884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94864 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94872 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94880 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.498978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.498984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94888 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.498991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.498999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.499005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.499011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94896 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.499018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.499025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.499031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.499037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94904 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.499044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.499052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.499057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.638 [2024-10-11 11:58:37.499063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94912 len:8 PRP1 0x0 PRP2 0x0 00:25:07.638 [2024-10-11 11:58:37.499070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.638 [2024-10-11 11:58:37.499078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.638 [2024-10-11 11:58:37.499083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.499089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94920 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.499096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.499104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.499109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.499115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94928 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.499122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.499134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.499140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.499146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94936 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.499153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.499161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.499166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.499172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94944 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.499181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.499189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.499195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.499201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94952 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.499208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.499215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.499221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.499227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94960 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.499234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.499242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.499247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.506911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94968 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.506939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.506952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.506960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.506968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94976 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.506977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.506986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.506991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.506998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94984 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94992 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95000 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:07.639 [2024-10-11 11:58:37.507065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95008 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95016 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95024 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95032 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95040 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95048 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507233] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95056 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95064 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95072 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95080 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.639 [2024-10-11 11:58:37.507346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.639 [2024-10-11 11:58:37.507352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95088 len:8 PRP1 0x0 PRP2 0x0 00:25:07.639 [2024-10-11 11:58:37.507359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.639 [2024-10-11 11:58:37.507367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95096 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95104 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95112 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95120 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95128 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95136 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95144 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 
11:58:37.507559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95152 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95160 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95168 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.640 [2024-10-11 11:58:37.507639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.640 [2024-10-11 11:58:37.507645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95176 len:8 PRP1 0x0 PRP2 0x0 00:25:07.640 [2024-10-11 11:58:37.507652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.640 [2024-10-11 11:58:37.507698] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24ab370 was disconnected and freed. reset controller. 00:25:07.640 [2024-10-11 11:58:37.507709] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:07.640 [2024-10-11 11:58:37.507717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.640 [2024-10-11 11:58:37.507764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248a270 (9): Bad file descriptor 00:25:07.640 [2024-10-11 11:58:37.512142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.640 [2024-10-11 11:58:37.561878] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:07.640 10689.50 IOPS, 41.76 MiB/s [2024-10-11T09:58:52.272Z]
00:25:07.640 10799.67 IOPS, 42.19 MiB/s [2024-10-11T09:58:52.272Z]
00:25:07.640 11191.25 IOPS, 43.72 MiB/s [2024-10-11T09:58:52.272Z]
00:25:07.640 [2024-10-11 11:58:41.068760 .. 11:58:41.069260] nvme_qpair.c: [condensed: repeated nvme_io_qpair_print_command *NOTICE* READ sqid:1 (varying cid) nsid:1 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 / spdk_nvme_print_completion *NOTICE* pairs — queued READs lba:44496-44808 in steps of 8, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; final entry truncated in source]
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.641 [2024-10-11 11:58:41.069370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.641 [2024-10-11 11:58:41.069375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.642 [2024-10-11 11:58:41.069382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.642 [2024-10-11 11:58:41.069387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.642 [2024-10-11 11:58:41.069399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.642 [2024-10-11 11:58:41.069410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.642 [2024-10-11 11:58:41.069422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.642 [2024-10-11 11:58:41.069433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.642 [2024-10-11 11:58:41.069445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.642 [2024-10-11 11:58:41.069457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069499] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069616] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45136 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.642 [2024-10-11 11:58:41.069826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.642 [2024-10-11 11:58:41.069833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.643 [2024-10-11 11:58:41.069839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.643 [2024-10-11 11:58:41.069850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.643 
[2024-10-11 11:58:41.069862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.643 [2024-10-11 11:58:41.069873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.643 [2024-10-11 11:58:41.069884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.643 [2024-10-11 11:58:41.069896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.643 [2024-10-11 11:58:41.069908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.069928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45256 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.069934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.069946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.069950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45264 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.069956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.069965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.069970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45272 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.069975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.069980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.069984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.069988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:45280 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.069995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45296 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45304 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45312 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45320 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45328 len:8 PRP1 0x0 PRP2 0x0 
00:25:07.643 [2024-10-11 11:58:41.070109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45336 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45344 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45352 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45360 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45368 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45376 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45384 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45392 len:8 PRP1 0x0 PRP2 0x0 00:25:07.643 [2024-10-11 11:58:41.070260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.643 [2024-10-11 11:58:41.070265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.643 [2024-10-11 11:58:41.070269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.643 [2024-10-11 11:58:41.070273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45400 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.070278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.070285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.070289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.070293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45408 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.070298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.070304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.070308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.070312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45416 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.070318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.070327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.070331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.070335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45424 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.070340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.070345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.070350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45432 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45440 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45448 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45456 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45464 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45472 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:07.644 [2024-10-11 11:58:41.085424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45480 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45488 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45496 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45504 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.644 [2024-10-11 11:58:41.085499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.644 [2024-10-11 11:58:41.085504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45512 len:8 PRP1 0x0 PRP2 0x0 00:25:07.644 [2024-10-11 11:58:41.085509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085545] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24ad420 was disconnected and freed. reset controller. 
00:25:07.644 [2024-10-11 11:58:41.085553] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:07.644 [2024-10-11 11:58:41.085574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.644 [2024-10-11 11:58:41.085581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.644 [2024-10-11 11:58:41.085595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.644 [2024-10-11 11:58:41.085605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.644 [2024-10-11 11:58:41.085616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:41.085621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.644 [2024-10-11 11:58:41.085653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248a270 (9): Bad file descriptor 00:25:07.644 [2024-10-11 11:58:41.089040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.644 [2024-10-11 11:58:41.122185] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
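[editorial sketch, not part of the captured console output] Every completion in the run above carries status 00/08 — Generic Command Status, Command Aborted due to SQ Deletion: the submission queue was deleted out from under the in-flight READ/WRITE commands when the TCP qpair to 10.0.0.2:4421 was torn down, after which bdev_nvme failed over to 10.0.0.2:4422, reset the controller, and the workload resumed. A minimal sketch of how an I/O completion callback can recognize that status and requeue rather than fail the I/O is below; requeue_io() and fail_io() are hypothetical helpers standing in for an application's retry path, not code from this test run.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical helpers: stand-ins for a real application's
     * requeue-after-reset and hard-failure paths. */
    static void requeue_io(void *io_ctx) { (void)io_ctx; printf("requeue after reset\n"); }
    static void fail_io(void *io_ctx)    { (void)io_ctx; printf("fail I/O\n"); }

    /* Completion callback with the spdk_nvme_cmd_cb shape. */
    static void
    io_complete_cb(void *io_ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_success(cpl)) {
                    return; /* normal completion */
            }

            /* "ABORTED - SQ DELETION (00/08)" in the log above: sct 0x0
             * (generic), sc 0x08. The command did not fail on the media;
             * its queue pair was deleted during failover, so it is safe to
             * resubmit once the controller reset completes and a new qpair
             * is connected. */
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    requeue_io(io_ctx);
                    return;
            }

            fail_io(io_ctx); /* any other error status is a real failure */
    }

A callback of this shape would be registered per I/O, e.g. as the cb_fn argument of spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write() — the same callback shape the bdevperf run logged here uses internally.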
00:25:07.644 11394.40 IOPS, 44.51 MiB/s [2024-10-11T09:58:52.276Z] 11660.83 IOPS, 45.55 MiB/s [2024-10-11T09:58:52.276Z] 11851.14 IOPS, 46.29 MiB/s [2024-10-11T09:58:52.276Z] 11979.12 IOPS, 46.79 MiB/s [2024-10-11T09:58:52.276Z] 12110.44 IOPS, 47.31 MiB/s [2024-10-11T09:58:52.276Z] [2024-10-11 11:58:45.452194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.644 [2024-10-11 11:58:45.452222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:45.452234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.644 [2024-10-11 11:58:45.452240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:45.452248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.644 [2024-10-11 11:58:45.452253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:45.452260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.644 [2024-10-11 11:58:45.452265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:45.452272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.644 [2024-10-11 11:58:45.452277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.644 [2024-10-11 11:58:45.452283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.645 [2024-10-11 11:58:45.452288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.645 [2024-10-11 11:58:45.452295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.645 [2024-10-11 11:58:45.452300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.645 [2024-10-11 11:58:45.452307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.645 [2024-10-11 11:58:45.452312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.645 [2024-10-11 11:58:45.452323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.645 [2024-10-11 11:58:45.452328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.645 [2024-10-11 11:58:45.452334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
[log condensed: 00:25:07.645-646, 2024-10-11 11:58:45.452339-.453057 — nvme_qpair.c alternates "243:nvme_io_qpair_print_command: *NOTICE*" and "474:spdk_nvme_print_completion: *NOTICE*" records as every in-flight command on qid:1 is failed back: WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 for each 8-block lba from 114856 through 115280 (cid varies per command), and READ sqid:1 nsid:1 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 for each 8-block lba from 114480 through 114528. Every completion record is identical: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
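Every completion above carries the same status word, which SPDK renders as "(SCT/SC)": status code type 0x00 (generic) with status code 0x08, "Command Aborted due to SQ Deletion" — the expected result when a submission queue is deleted while I/O is still outstanding. A minimal, self-contained decoder for that 16-bit completion status field reproduces the "(00/08) p:0 m:0 dnr:0" rendering; the bit offsets follow the NVMe base specification rather than any SPDK header, so treat the names as illustrative:

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit NVMe completion status (phase tag + status field).
 * Per the NVMe base spec: bit 0 = P, bits 1-8 = SC, bits 9-11 = SCT,
 * bit 14 = M, bit 15 = DNR.  SCT 0x0 / SC 0x08 is "Command Aborted
 * due to SQ Deletion", printed above as "ABORTED - SQ DELETION (00/08)". */
static void
decode_status(uint16_t s)
{
    unsigned int p   = (s >> 0)  & 0x1;   /* phase tag        */
    unsigned int sc  = (s >> 1)  & 0xff;  /* status code      */
    unsigned int sct = (s >> 9)  & 0x7;   /* status code type */
    unsigned int m   = (s >> 14) & 0x1;   /* more             */
    unsigned int dnr = (s >> 15) & 0x1;   /* do not retry     */

    printf("(%02x/%02x) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x08) ? "  -> ABORTED - SQ DELETION" : "");
}

int
main(void)
{
    decode_status(0x08 << 1); /* reproduces "(00/08) p:0 m:0 dnr:0" */
    return 0;
}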
[log condensed: 2024-10-11 11:58:45.453063-.453150 — the same WRITE / ABORTED - SQ DELETION (00/08) pairs continue for each 8-block lba from 115288 through 115344 on qid:1. At .453167 the path changes: "558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:" appears for WRITE sqid:1 cid:0 nsid:1 lba:115352 len:8 PRP1 0x0 PRP2 0x0, again ABORTED - SQ DELETION. On the admin queue, four "223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c)" records (qid:0 cid:0 through cid:3, nsid:0 cdw10:00000000 cdw11:00000000) are likewise aborted with SQ DELETION (00/08). One transport error follows: [2024-10-11 11:58:45.453251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248a270 is same with the state(6) to be set. From .453377 onward, "579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o" / "Command completed manually" triplets begin draining the queued WRITEs at lba 115360, 115368, 115376 and 115384, all PRP1 0x0 PRP2 0x0, all ABORTED - SQ DELETION]
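Two distinct abort paths interleave from this point on. The "474:spdk_nvme_print_completion" records earlier are completions for commands that had already been submitted, while the "579:nvme_qpair_abort_queued_reqs" / "558:nvme_qpair_manual_complete_request" pairs drain requests still sitting in the software queue — presumably why their command prints show PRP1 0x0 PRP2 0x0: they were never handed to the transport. From the application side both surface through the ordinary I/O callback. A hedged sketch of how such a callback could classify these aborts; the callback shape and the two status constants are SPDK's public spdk/nvme.h API, while the surrounding qpair setup and any resubmission policy are omitted assumptions:

#include <stdio.h>
#include "spdk/nvme.h"

/* Passed as the cb_fn argument of e.g. spdk_nvme_ns_cmd_write().
 * Recognizes the ABORTED - SQ DELETION (00/08) completions seen above. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx;
    if (spdk_nvme_cpl_is_error(cpl)) {
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* The SQ was deleted (e.g. controller disconnect); the
             * command did not execute and could be resubmitted on a
             * fresh qpair after the controller is reconnected. */
            fprintf(stderr, "aborted by SQ deletion: cid=%u\n", cpl->cid);
        }
        return;
    }
    /* success path */
}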
[log condensed: 2024-10-11 11:58:45.453454-.468957 — the abort-queued-reqs drain continues as identical triplets: "579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o" → "558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:" → WRITE or READ sqid:1 cid:0 nsid:1 lba:... len:8 PRP1 0x0 PRP2 0x0 → "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0". The run covers queued WRITEs for each 8-block lba 115392 through 115488 and 114792 through 115048, and queued READs for lba 114536 through 114784 plus lba 114472; note the ~14 ms timestamp jump from .453901 to .467894 partway through (at READ lba:114616). The dump is cut off mid-record here and the drain continues below]
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.650 [2024-10-11 11:58:45.468962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.650 [2024-10-11 11:58:45.468966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.650 [2024-10-11 11:58:45.468970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115056 len:8 PRP1 0x0 PRP2 0x0 00:25:07.650 [2024-10-11 11:58:45.468975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.468980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.468984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.468988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115064 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.468995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.469000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.469003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.469008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115072 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.469013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.469018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.469022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.469028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115080 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.469033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.469038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.469042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.469047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115088 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.469052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.469057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115096 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 
[2024-10-11 11:58:45.476137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115104 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115112 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115120 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115128 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115136 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115144 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476292] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115152 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115160 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115168 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115176 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115184 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115192 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115200 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.651 [2024-10-11 11:58:45.476476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115208 len:8 PRP1 0x0 PRP2 0x0 00:25:07.651 [2024-10-11 11:58:45.476483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.651 [2024-10-11 11:58:45.476490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.651 [2024-10-11 11:58:45.476495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115216 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115224 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115232 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115240 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 
11:58:45.476592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114480 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114488 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114496 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114504 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114512 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114520 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476747] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114528 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115248 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115256 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115264 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115272 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115280 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115288 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115296 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115304 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115312 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.476979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.476986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.476993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.476999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115320 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.477005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.477012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.477017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.652 [2024-10-11 11:58:45.477023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115328 len:8 PRP1 0x0 PRP2 0x0 00:25:07.652 [2024-10-11 11:58:45.477029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.652 [2024-10-11 11:58:45.477036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.652 [2024-10-11 11:58:45.477041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.653 
[2024-10-11 11:58:45.477047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115336 len:8 PRP1 0x0 PRP2 0x0 00:25:07.653 [2024-10-11 11:58:45.477054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.653 [2024-10-11 11:58:45.477061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.653 [2024-10-11 11:58:45.477065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.653 [2024-10-11 11:58:45.477071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115344 len:8 PRP1 0x0 PRP2 0x0 00:25:07.653 [2024-10-11 11:58:45.477078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.653 [2024-10-11 11:58:45.477085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.653 [2024-10-11 11:58:45.477090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.653 [2024-10-11 11:58:45.477095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115352 len:8 PRP1 0x0 PRP2 0x0 00:25:07.653 [2024-10-11 11:58:45.477102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.653 [2024-10-11 11:58:45.477142] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24b7700 was disconnected and freed. reset controller. 00:25:07.653 [2024-10-11 11:58:45.477151] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:07.653 [2024-10-11 11:58:45.477159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.653 [2024-10-11 11:58:45.477202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248a270 (9): Bad file descriptor 00:25:07.653 [2024-10-11 11:58:45.480878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.653 [2024-10-11 11:58:45.595731] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
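The reset sequence above is what failover.sh exercises three times in its first phase: when the path in use disappears, bdev_nvme aborts everything still queued on the deleted submission queue (the ABORTED - SQ DELETION completions), fails over to the next registered trid, and resets the controller. A minimal sketch of one way to force such a path switch by hand, assuming the rpc.py path, socket, and ports from the traces in this log (not the verbatim failover.sh source):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Drop the path the initiator is currently using; bdev_nvme then fails
    # over to the remaining trid and logs "Resetting controller successful".
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1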
00:25:07.653 12012.20 IOPS, 46.92 MiB/s
[2024-10-11T09:58:52.285Z] 12098.27 IOPS, 47.26 MiB/s
[2024-10-11T09:58:52.285Z] 12193.33 IOPS, 47.63 MiB/s
[2024-10-11T09:58:52.285Z] 12264.62 IOPS, 47.91 MiB/s
[2024-10-11T09:58:52.285Z] 12334.57 IOPS, 48.18 MiB/s
00:25:07.653 Latency(us)
00:25:07.653 [2024-10-11T09:58:52.285Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:07.653 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:07.653 Verification LBA range: start 0x0 length 0x4000
00:25:07.653 NVMe0n1                     :      15.00   12392.55      48.41     611.92       0.00    9821.59     546.13   32768.00
00:25:07.653 [2024-10-11T09:58:52.285Z] ===================================================================================================================
00:25:07.653 [2024-10-11T09:58:52.285Z] Total                       :           12392.55      48.41     611.92       0.00    9821.59     546.13   32768.00
00:25:07.653 Received shutdown signal, test time was about 15.000000 seconds
00:25:07.653
00:25:07.653 Latency(us)
00:25:07.653 [2024-10-11T09:58:52.285Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:07.653 [2024-10-11T09:58:52.285Z] ===================================================================================================================
00:25:07.653 [2024-10-11T09:58:52.285Z] Total                       :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1130675
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1130675 /var/tmp/bdevperf.sock
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1130675 ']'
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
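The @65-@67 trace above is the pass/fail gate for the first phase: three failovers were provoked, so the captured bdevperf log must contain exactly three successful resets before the second bdevperf instance is launched. A sketch of that gate, assuming $rootdir points at the SPDK checkout and try.txt is the captured log, as in the cat trace further down (hypothetical variable names, not the verbatim failover.sh):

    count=$(grep -c 'Resetting controller successful' "$rootdir/test/nvmf/host/try.txt")
    (( count == 3 )) || { echo "expected 3 successful resets, got $count"; exit 1; }
    # The second bdevperf is started with -z, so it idles until it is driven
    # over the RPC socket (-r) by bdevperf.py perform_tests later in the log.
    "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!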
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:07.653 11:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:07.914 11:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:07.914 11:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:25:07.914 11:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:08.175 [2024-10-11 11:58:52.651701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:08.175 11:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:08.436 [2024-10-11 11:58:52.832122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:08.436 11:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:08.697 NVMe0n1
00:25:08.697 11:58:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:08.958
00:25:08.958 11:58:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:09.219
00:25:09.219 11:58:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:09.219 11:58:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:09.219 11:58:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:09.480 11:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:12.808 11:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:12.808 11:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:12.808 11:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:12.808 11:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1131699
00:25:12.808 11:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1131699
00:25:13.751 {
00:25:13.751   "results": [
00:25:13.751     {
00:25:13.751       "job": "NVMe0n1",
00:25:13.751       "core_mask": "0x1",
00:25:13.751       "workload": "verify",
00:25:13.751       "status": "finished",
00:25:13.751       "verify_range": {
00:25:13.751         "start": 0,
00:25:13.751         "length": 16384
00:25:13.751       },
00:25:13.751       "queue_depth": 128,
00:25:13.751       "io_size": 4096,
00:25:13.751       "runtime": 1.004251,
00:25:13.751       "iops": 12932.025957654012,
00:25:13.751       "mibps": 50.515726397085984,
00:25:13.751       "io_failed": 0,
00:25:13.751       "io_timeout": 0,
00:25:13.751       "avg_latency_us": 9861.220268473602,
00:25:13.751       "min_latency_us": 826.0266666666666,
00:25:13.751       "max_latency_us": 8792.746666666666
00:25:13.751     }
00:25:13.751   ],
00:25:13.751   "core_count": 1
00:25:13.751 }
00:25:13.751 11:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:13.751 [2024-10-11 11:58:51.699097] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:25:13.751 [2024-10-11 11:58:51.699157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130675 ]
00:25:13.751 [2024-10-11 11:58:51.776974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:13.751 [2024-10-11 11:58:51.806500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:13.751 [2024-10-11 11:58:53.980877] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:13.751 [2024-10-11 11:58:53.980913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:13.751 [2024-10-11 11:58:53.980922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:13.751 [2024-10-11 11:58:53.980929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:13.751 [2024-10-11 11:58:53.980935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:13.751 [2024-10-11 11:58:53.980941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:13.751 [2024-10-11 11:58:53.980946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:13.751 [2024-10-11 11:58:53.980951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:13.751 [2024-10-11 11:58:53.980956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:13.751 [2024-10-11 11:58:53.980965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:13.751 [2024-10-11 11:58:53.980987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:13.751 [2024-10-11 11:58:53.980998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ab270 (9): Bad file descriptor
00:25:13.751 [2024-10-11 11:58:53.992083] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:13.751 Running I/O for 1 seconds...
00:25:13.751 12859.00 IOPS, 50.23 MiB/s
00:25:13.751 Latency(us)
[2024-10-11T09:58:58.383Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:13.751 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:13.751 Verification LBA range: start 0x0 length 0x4000
00:25:13.751 NVMe0n1                     :       1.00   12932.03      50.52       0.00       0.00    9861.22     826.03    8792.75
[2024-10-11T09:58:58.383Z] ===================================================================================================================
[2024-10-11T09:58:58.383Z] Total                       :           12932.03      50.52       0.00       0.00    9861.22     826.03    8792.75
00:25:13.751 11:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:13.751 11:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:14.012 11:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:14.272 11:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:14.272 11:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:14.272 11:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:14.533 11:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1130675
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1130675 ']'
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1130675
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1130675
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1130675'
killing process with pid 1130675
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1130675
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1130675
00:25:17.837 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:18.097 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:18.097 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:18.098 rmmod nvme_tcp
00:25:18.098 rmmod nvme_fabrics
00:25:18.098 rmmod nvme_keyring
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1126962 ']'
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1126962
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1126962 ']'
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1126962
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1126962
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1126962'
killing process with pid 1126962
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1126962
00:25:18.098 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1126962
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore
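The @297/@789 traces show how nvmftestfini scrubs the firewall: every rule the test added carries an SPDK_NVMF comment, so the teardown just round-trips the ruleset through a filter. In sketch form (assuming the SPDK_NVMF tag used by the traced nvmf/common.sh):

    iptables-save | grep -v SPDK_NVMF | iptables-restore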
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:18.359 11:59:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:20.907 11:59:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:20.907
00:25:20.907 real 0m40.204s
00:25:20.907 user 2m3.585s
00:25:20.907 sys 0m8.679s
00:25:20.907 11:59:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:20.907 11:59:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:20.907 ************************************
00:25:20.907 END TEST nvmf_failover
00:25:20.907 ************************************
00:25:20.907 11:59:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:20.907 11:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:20.907 11:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:20.907 11:59:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.907 ************************************
00:25:20.907 START TEST nvmf_host_discovery
00:25:20.907 ************************************
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:20.907 * Looking for test storage...
00:25:20.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:25:20.907 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:25:20.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:20.908 --rc genhtml_branch_coverage=1
00:25:20.908 --rc genhtml_function_coverage=1
00:25:20.908 --rc genhtml_legend=1
00:25:20.908 --rc geninfo_all_blocks=1
00:25:20.908 --rc geninfo_unexecuted_blocks=1
00:25:20.908
00:25:20.908 '
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:25:20.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:20.908 --rc genhtml_branch_coverage=1
00:25:20.908 --rc genhtml_function_coverage=1
00:25:20.908 --rc genhtml_legend=1
00:25:20.908 --rc geninfo_all_blocks=1
00:25:20.908 --rc geninfo_unexecuted_blocks=1
00:25:20.908
00:25:20.908 '
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:25:20.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:20.908 --rc genhtml_branch_coverage=1
00:25:20.908 --rc genhtml_function_coverage=1
00:25:20.908 --rc genhtml_legend=1
00:25:20.908 --rc geninfo_all_blocks=1
00:25:20.908 --rc geninfo_unexecuted_blocks=1
00:25:20.908
00:25:20.908 '
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:25:20.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:20.908 --rc genhtml_branch_coverage=1
00:25:20.908 --rc genhtml_function_coverage=1
00:25:20.908 --rc genhtml_legend=1
00:25:20.908 --rc geninfo_all_blocks=1
00:25:20.908 --rc geninfo_unexecuted_blocks=1
00:25:20.908
00:25:20.908 '
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:25:20.908 11:59:05
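The @333-@368 trace above is scripts/common.sh's version comparison deciding that lcov 1.15 < 2: both strings are split on '.', '-' and ':' into arrays, the components are compared numerically from the left, and the walk stops at the first decided position. A condensed sketch of that logic as a standalone function, assuming purely numeric components as the traced decimal helper enforces (hypothetical name ver_lt, mirroring the traced steps rather than the verbatim scripts/common.sh):

    ver_lt() {                      # ver_lt 1.15 2  ->  returns 0 iff $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do       # missing components count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                    # equal versions are not strictly less
    }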
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.908 11:59:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:29.065 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:29.065 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.065 11:59:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.065 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:29.066 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:29.066 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:29.066 
11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:29.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:25:29.066 00:25:29.066 --- 10.0.0.2 ping statistics --- 00:25:29.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.066 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:25:29.066 00:25:29.066 --- 10.0.0.1 ping statistics --- 00:25:29.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.066 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1137023 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1137023 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1137023 ']' 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.066 11:59:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.066 [2024-10-11 11:59:12.760222] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
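Note: the nvmf_tgt launched just above runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init configured earlier in this trace. A condensed sketch of that plumbing, using the same interface and namespace names as the trace (the cvl_* names and the relative nvmf_tgt path are specific to this run; adjust for local NICs and checkout layout):

    # Move one port of the NIC into a private namespace to act as the target;
    # the sibling port stays in the default namespace as the initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator side and verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
    # The target application is then started inside the namespace, as traced above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2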
00:25:29.066 [2024-10-11 11:59:12.760288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.066 [2024-10-11 11:59:12.850181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.066 [2024-10-11 11:59:12.900587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.066 [2024-10-11 11:59:12.900639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.066 [2024-10-11 11:59:12.900647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.066 [2024-10-11 11:59:12.900655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.066 [2024-10-11 11:59:12.900661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.066 [2024-10-11 11:59:12.901416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.066 [2024-10-11 11:59:13.634504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.066 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.066 [2024-10-11 11:59:13.646803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.067 null0 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.067 null1 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1137068 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1137068 /tmp/host.sock 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1137068 ']' 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:29.067 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.067 11:59:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.327 [2024-10-11 11:59:13.743997] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
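Note: the second nvmf_tgt started above with -m 0x1 -r /tmp/host.sock plays the NVMe host/initiator role; every host-side step in the remainder of the trace selects it by passing -s /tmp/host.sock to rpc_cmd, while target-side RPCs go to the default socket of the namespaced instance. A sketch of that split, with scripts/rpc.py standing in for the trace's rpc_cmd wrapper (same RPC names as the trace):

    # Host-side calls (the /tmp/host.sock instance):
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs
    # Target-side provisioning (default RPC socket):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_null_create null0 1000 512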
00:25:29.327 [2024-10-11 11:59:13.744058] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137068 ] 00:25:29.327 [2024-10-11 11:59:13.825724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.327 [2024-10-11 11:59:13.879793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.268 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:30.268 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:30.268 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.268 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:30.268 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 [2024-10-11 11:59:14.881982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.269 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:30.530 11:59:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.530 11:59:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:30.530 11:59:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:31.101 [2024-10-11 11:59:15.594157] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:31.101 [2024-10-11 11:59:15.594194] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:31.101 [2024-10-11 11:59:15.594209] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:31.101 
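Note: the three INFO lines just above come from bdev_nvme's discovery poller: the host attached to the discovery controller at 10.0.0.2:8009 and requested the discovery log page, in response to the bdev_nvme_start_discovery RPC issued earlier. Stripped of the surrounding assertions, the discovery setup exercised by this trace reduces to the following sketch (same NQNs, addresses, and RPCs as above; ordering compressed for readability):

    # Target: discovery listener, a subsystem backed by a null bdev, an allowed host, a data listener.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host: start discovery; bdev_nvme then attaches to whatever the log page reports (here, nvme0).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test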
[2024-10-11 11:59:15.682457] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:31.362 [2024-10-11 11:59:15.910315] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:31.362 [2024-10-11 11:59:15.910353] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.622 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
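Note: the repeating eval/test/return lines around this point are the body of the waitforcondition helper from autotest_common.sh, which xtrace expands inline: it re-evaluates a condition string up to ten times, one second apart. Reconstructed from the traced lines (@914-@920 above; a sketch, the in-tree helper may differ in detail):

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition true: success
            sleep 1                    # otherwise poll again
        done
        return 1                       # timed out
    }

    # as used above:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'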
00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:31.623 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.883 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.884 [2024-10-11 11:59:16.434096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.884 [2024-10-11 11:59:16.434244] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:31.884 [2024-10-11 11:59:16.434281] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.884 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.144 [2024-10-11 11:59:16.522518] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:32.144 11:59:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:32.144 [2024-10-11 11:59:16.629727] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:32.144 [2024-10-11 11:59:16.629762] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:32.145 [2024-10-11 11:59:16.629769] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.085 [2024-10-11 11:59:17.710434] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:33.085 [2024-10-11 11:59:17.710452] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:33.085 [2024-10-11 11:59:17.711520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.085 [2024-10-11 11:59:17.711533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.085 [2024-10-11 11:59:17.711539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.085 [2024-10-11 11:59:17.711545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.085 [2024-10-11 11:59:17.711550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.085 [2024-10-11 11:59:17.711556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.085 [2024-10-11 11:59:17.711561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:33.085 [2024-10-11 11:59:17.711567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.085 [2024-10-11 11:59:17.711572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6310 is same with the state(6) to be set 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.085 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:33.347 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:33.347 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.347 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.347 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.347 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.347 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.347 [2024-10-11 11:59:17.721535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a6310 (9): Bad file descriptor 00:25:33.347 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.348 [2024-10-11 11:59:17.731570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.348 [2024-10-11 11:59:17.732038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.348 [2024-10-11 11:59:17.732068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a6310 with addr=10.0.0.2, port=4420 00:25:33.348 [2024-10-11 11:59:17.732077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6310 is same with the state(6) to be set 00:25:33.348 [2024-10-11 11:59:17.732096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a6310 (9): Bad file descriptor 00:25:33.348 [2024-10-11 11:59:17.732104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.348 [2024-10-11 11:59:17.732109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.348 [2024-10-11 11:59:17.732116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.348 [2024-10-11 11:59:17.732128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
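The repeating @914-@920 fragments threaded through this trace are autotest_common.sh's condition-polling loop, which every waitforcondition call above expands into. A minimal sketch reconstructed from the traced line numbers; the exact body in autotest_common.sh may differ:

    # Re-evaluate a bash condition once per second, up to 10 tries.
    # cond is passed as a string so command substitutions inside it,
    # e.g. "$(get_bdev_list)", are re-run on every pass of the loop.
    waitforcondition() {
        local cond=$1      # traced as @914
        local max=10       # traced as @915
        while (( max-- )); do          # traced as @916
            if eval "$cond"; then      # traced as @917
                return 0               # traced as @918
            fi
            sleep 1                    # traced as @920
        done
        return 1   # failure path is assumed; it never fires in this trace
    }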
00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.348 [2024-10-11 11:59:17.741620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.348 [2024-10-11 11:59:17.741955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.348 [2024-10-11 11:59:17.741965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a6310 with addr=10.0.0.2, port=4420 00:25:33.348 [2024-10-11 11:59:17.741971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6310 is same with the state(6) to be set 00:25:33.348 [2024-10-11 11:59:17.741979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a6310 (9): Bad file descriptor 00:25:33.348 [2024-10-11 11:59:17.741986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.348 [2024-10-11 11:59:17.741991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.348 [2024-10-11 11:59:17.741996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.348 [2024-10-11 11:59:17.742003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.348 [2024-10-11 11:59:17.751672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.348 [2024-10-11 11:59:17.751991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.348 [2024-10-11 11:59:17.751999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a6310 with addr=10.0.0.2, port=4420 00:25:33.348 [2024-10-11 11:59:17.752004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6310 is same with the state(6) to be set 00:25:33.348 [2024-10-11 11:59:17.752012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a6310 (9): Bad file descriptor 00:25:33.348 [2024-10-11 11:59:17.752019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.348 [2024-10-11 11:59:17.752023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.348 [2024-10-11 11:59:17.752028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.348 [2024-10-11 11:59:17.752036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
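The @55/@59/@63 pipelines the loop keeps re-running are small query helpers from host/discovery.sh; each flattens JSON-RPC output into one sorted, space-separated line so it can be string-compared against an expected value. Reconstructed from the fragments above (rpc_cmd is autotest's wrapper around scripts/rpc.py); the in-tree bodies may differ slightly:

    get_subsystem_names() {   # discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {   # discovery.sh@63; trsvcid is the listener port
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The trailing xargs collapses the sorted names onto a single line, which is why the eval comparisons above test against strings like "nvme0n1 nvme0n2" and "4420 4421".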
00:25:33.348 [2024-10-11 11:59:17.761716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.348 [2024-10-11 11:59:17.762026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.348 [2024-10-11 11:59:17.762037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a6310 with addr=10.0.0.2, port=4420 00:25:33.348 [2024-10-11 11:59:17.762043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6310 is same with the state(6) to be set 00:25:33.348 [2024-10-11 11:59:17.762052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a6310 (9): Bad file descriptor 00:25:33.348 [2024-10-11 11:59:17.762062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.348 [2024-10-11 11:59:17.762070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.348 [2024-10-11 11:59:17.762076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.348 [2024-10-11 11:59:17.762083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:33.348 [2024-10-11 11:59:17.771765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.348 [2024-10-11 11:59:17.772094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.348 [2024-10-11 11:59:17.772103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a6310 with addr=10.0.0.2, port=4420 00:25:33.348 [2024-10-11 11:59:17.772108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6310 is same with the state(6) to be set 00:25:33.348 [2024-10-11 11:59:17.772116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a6310 (9): Bad file descriptor 00:25:33.348 [2024-10-11 11:59:17.772123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.348 [2024-10-11 11:59:17.772127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.348 [2024-10-11 11:59:17.772132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.348 [2024-10-11 11:59:17.772139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.348 [2024-10-11 11:59:17.781809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.348 [2024-10-11 11:59:17.782001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.348 [2024-10-11 11:59:17.782012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a6310 with addr=10.0.0.2, port=4420 00:25:33.348 [2024-10-11 11:59:17.782017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6310 is same with the state(6) to be set 00:25:33.348 [2024-10-11 11:59:17.782025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a6310 (9): Bad file descriptor 00:25:33.348 [2024-10-11 11:59:17.782037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.348 [2024-10-11 11:59:17.782042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.348 [2024-10-11 11:59:17.782050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.348 [2024-10-11 11:59:17.782058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.348 [2024-10-11 11:59:17.791857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:33.348 [2024-10-11 11:59:17.792118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.348 [2024-10-11 11:59:17.792127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a6310 with addr=10.0.0.2, port=4420 00:25:33.348 [2024-10-11 11:59:17.792132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a6310 is same with the state(6) to be set 00:25:33.348 [2024-10-11 11:59:17.792139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a6310 (9): Bad file descriptor 00:25:33.348 [2024-10-11 11:59:17.792152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.348 [2024-10-11 11:59:17.792157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.348 [2024-10-11 11:59:17.792162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.348 [2024-10-11 11:59:17.792169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
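The notification checks (@74/@75) that bracket each add/remove step follow one pattern: fetch every event newer than the last seen notify_id, count the result with jq, and advance the cursor. The notify_id progression visible in this trace (1, 2, 2, 2, then 4) is consistent with a sketch like the following, though the increment logic is inferred rather than directly traced:

    get_notification_count() {
        # discovery.sh@74: count events since the last checkpoint
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        # discovery.sh@75: advance the checkpoint past what was just seen
        notify_id=$((notify_id + notification_count))
    }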
00:25:33.348 [2024-10-11 11:59:17.799174] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:33.348 [2024-10-11 11:59:17.799187] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:33.348 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:33.349 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.610 11:59:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.610 11:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.552 [2024-10-11 11:59:19.142868] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:34.552 [2024-10-11 11:59:19.142882] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:34.552 [2024-10-11 11:59:19.142891] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:34.812 [2024-10-11 11:59:19.231140] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:35.072 [2024-10-11 11:59:19.500221] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:35.072 [2024-10-11 11:59:19.500245] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.072 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.072 request: 00:25:35.072 { 00:25:35.072 "name": "nvme", 00:25:35.072 "trtype": "tcp", 00:25:35.072 "traddr": "10.0.0.2", 00:25:35.072 "adrfam": "ipv4", 00:25:35.072 "trsvcid": "8009", 00:25:35.072 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:35.072 "wait_for_attach": true, 00:25:35.072 "method": "bdev_nvme_start_discovery", 00:25:35.072 "req_id": 1 00:25:35.072 } 00:25:35.072 Got JSON-RPC error response 00:25:35.072 response: 00:25:35.073 { 00:25:35.073 "code": -17, 00:25:35.073 "message": "File exists" 00:25:35.073 } 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.073 request: 00:25:35.073 { 00:25:35.073 "name": "nvme_second", 00:25:35.073 "trtype": "tcp", 00:25:35.073 "traddr": "10.0.0.2", 00:25:35.073 "adrfam": "ipv4", 00:25:35.073 "trsvcid": "8009", 00:25:35.073 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:35.073 "wait_for_attach": true, 00:25:35.073 "method": "bdev_nvme_start_discovery", 00:25:35.073 "req_id": 1 00:25:35.073 } 00:25:35.073 Got JSON-RPC error response 00:25:35.073 response: 00:25:35.073 { 00:25:35.073 "code": -17, 00:25:35.073 "message": "File exists" 00:25:35.073 } 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:35.073 11:59:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.073 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.333 11:59:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:36.274 [2024-10-11 11:59:20.759532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.274 [2024-10-11 11:59:20.759566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e0080 with addr=10.0.0.2, port=8010 00:25:36.274 [2024-10-11 11:59:20.759579] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:36.274 [2024-10-11 11:59:20.759584] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:36.274 [2024-10-11 11:59:20.759590] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:37.215 [2024-10-11 11:59:21.761920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.215 [2024-10-11 11:59:21.761941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e0080 with addr=10.0.0.2, port=8010 00:25:37.215 [2024-10-11 11:59:21.761951] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:37.215 [2024-10-11 11:59:21.761956] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:37.215 [2024-10-11 11:59:21.761961] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:38.155 [2024-10-11 11:59:22.764015] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:38.155 request: 00:25:38.155 { 00:25:38.155 "name": "nvme_second", 00:25:38.155 "trtype": "tcp", 00:25:38.155 "traddr": "10.0.0.2", 00:25:38.155 "adrfam": "ipv4", 00:25:38.155 "trsvcid": "8010", 00:25:38.155 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:38.155 "wait_for_attach": false, 00:25:38.155 "attach_timeout_ms": 3000, 00:25:38.155 "method": "bdev_nvme_start_discovery", 00:25:38.155 "req_id": 1 00:25:38.155 } 00:25:38.155 Got JSON-RPC error response 00:25:38.155 response: 00:25:38.155 { 00:25:38.155 "code": -110, 00:25:38.155 "message": "Connection timed out" 00:25:38.155 } 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.155 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1137068 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:38.416 rmmod nvme_tcp 00:25:38.416 rmmod nvme_fabrics 00:25:38.416 rmmod nvme_keyring 00:25:38.416 11:59:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1137023 ']' 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1137023 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1137023 ']' 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1137023 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1137023 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1137023' 00:25:38.416 killing process with pid 1137023 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1137023 00:25:38.416 11:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1137023 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.677 11:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.586 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:40.586 00:25:40.586 real 0m20.134s 00:25:40.586 user 0m23.294s 00:25:40.586 sys 0m7.166s 00:25:40.586 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:40.586 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.586 
************************************ 00:25:40.586 END TEST nvmf_host_discovery 00:25:40.586 ************************************ 00:25:40.586 11:59:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:40.586 11:59:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:40.586 11:59:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:40.586 11:59:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.847 ************************************ 00:25:40.847 START TEST nvmf_host_multipath_status 00:25:40.847 ************************************ 00:25:40.847 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:40.847 * Looking for test storage... 00:25:40.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.847 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:40.847 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:40.847 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:40.847 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:40.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.848 --rc genhtml_branch_coverage=1 00:25:40.848 --rc genhtml_function_coverage=1 00:25:40.848 --rc genhtml_legend=1 00:25:40.848 --rc geninfo_all_blocks=1 00:25:40.848 --rc geninfo_unexecuted_blocks=1 00:25:40.848 00:25:40.848 ' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:40.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.848 --rc genhtml_branch_coverage=1 00:25:40.848 --rc genhtml_function_coverage=1 00:25:40.848 --rc genhtml_legend=1 00:25:40.848 --rc geninfo_all_blocks=1 00:25:40.848 --rc geninfo_unexecuted_blocks=1 00:25:40.848 00:25:40.848 ' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:40.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.848 --rc genhtml_branch_coverage=1 00:25:40.848 --rc genhtml_function_coverage=1 00:25:40.848 --rc genhtml_legend=1 00:25:40.848 --rc geninfo_all_blocks=1 00:25:40.848 --rc geninfo_unexecuted_blocks=1 00:25:40.848 00:25:40.848 ' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:40.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.848 --rc genhtml_branch_coverage=1 00:25:40.848 --rc genhtml_function_coverage=1 00:25:40.848 --rc genhtml_legend=1 00:25:40.848 --rc geninfo_all_blocks=1 00:25:40.848 --rc geninfo_unexecuted_blocks=1 00:25:40.848 00:25:40.848 ' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
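The lcov gate traced above splits each version string on '.', '-' and ':' and compares the numeric fields left to right; 'lt 1.15 2' succeeds because 1 < 2 in the first field. A rough standalone equivalent of that check (not the exact scripts/common.sh helper, and handling numeric fields only):

    # Succeeds when $1 sorts strictly before $2, comparing field by field.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: pass explicit branch/function coverage flags"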
00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
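The "[: : integer expression expected" message above is benign but instructive: common.sh line 33 feeds a variable that is unset in this run to a numeric test, and '[' refuses to compare an empty string with -eq. A defensive variant of that pattern (FLAG_VAR is a placeholder, not the actual variable name used in common.sh):

    # Default the flag to 0 so the numeric comparison always sees an integer.
    if [ "${FLAG_VAR:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi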
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:40.848 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.849 11:59:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:48.988 11:59:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:48.988 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
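Device discovery above walks a table of PCI vendor:device IDs (both ports here match Intel 0x8086:0x159b, an E810 NIC bound to the ice driver) and then resolves each PCI function to its kernel netdev through sysfs. The resolution step is just a glob over the device's net/ directory:

    # Each PCI network function lists its netdev name(s) under /sys/bus/pci/devices/<bdf>/net/
    pci=0000:4b:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net device under $pci: ${dev##*/}"   # e.g. cvl_0_0 after udev renaming
    done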
00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:48.988 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:48.988 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.988 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:48.989 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.989 11:59:32 
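nvmf_tcp_init splits the two-port NIC across network namespaces: the target port cvl_0_0 moves into cvl_0_0_ns_spdk while the initiator port cvl_0_1 stays in the default namespace, so traffic between 10.0.0.1 and 10.0.0.2 crosses a real TCP link on one host. Condensed from the trace, the topology setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port now lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up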
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:48.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:25:48.989 00:25:48.989 --- 10.0.0.2 ping statistics --- 00:25:48.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.989 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:25:48.989 00:25:48.989 --- 10.0.0.1 ping statistics --- 00:25:48.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.989 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1143241 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1143241 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1143241 ']' 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.989 11:59:32 
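With both pings answering, nvmfappstart launches the target inside the namespace on a two-core mask and waits for its RPC socket. Stripped of the harness wrappers, the launch reduces to roughly:

    # Run nvmf_tgt in the target namespace; waitforlisten then polls /var/tmp/spdk.sock with retries.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!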
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.989 11:59:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:48.989 [2024-10-11 11:59:33.005979] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:25:48.989 [2024-10-11 11:59:33.006046] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.989 [2024-10-11 11:59:33.096635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:48.989 [2024-10-11 11:59:33.148228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.989 [2024-10-11 11:59:33.148278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.989 [2024-10-11 11:59:33.148287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.989 [2024-10-11 11:59:33.148294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.989 [2024-10-11 11:59:33.148301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.989 [2024-10-11 11:59:33.150090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.989 [2024-10-11 11:59:33.150093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.250 11:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.250 11:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:49.250 11:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:49.250 11:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:49.250 11:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:49.250 11:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.250 11:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1143241 00:25:49.250 11:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:49.510 [2024-10-11 11:59:34.033009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.510 11:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:49.771 Malloc0 00:25:49.771 11:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:50.031 11:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.292 11:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.292 [2024-10-11 11:59:34.858957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.292 11:59:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:50.588 [2024-10-11 11:59:35.055478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1143605 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1143605 /var/tmp/bdevperf.sock 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1143605 ']' 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
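The target is then provisioned over its RPC socket: one TCP transport, a 64 MiB / 512 B-block malloc bdev, and subsystem cnode1 exporting that namespace on both 4420 and 4421, which is what gives the initiator two paths to one disk. With rpc.py standing in for the workspace's scripts/rpc.py, the sequence is:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421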
00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:50.588 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:51.680 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.680 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:51.680 11:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:51.680 11:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:52.250 Nvme0n1 00:25:52.250 11:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:52.510 Nvme0n1 00:25:52.510 11:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:52.510 11:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:54.423 11:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:54.423 11:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:54.683 11:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:54.944 11:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:55.885 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:55.885 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.885 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.885 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.145 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.145 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:56.145 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
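On the initiator side, bdevperf attaches the same controller name over both listeners; the shared -b Nvme0 plus -x multipath is what folds the two connections into a single multipath bdev, Nvme0n1. Against the bdevperf RPC socket, as traced:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # -r -1: retry failed I/O in the bdev layer indefinitely
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10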
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.145 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.145 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.145 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.145 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.145 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.405 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.405 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.405 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.405 11:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.665 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.665 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.665 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.665 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.665 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.665 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.665 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.665 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.925 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.925 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:56.925 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
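Each check_status round above is six of these probes: dump the initiator's live path table with bdev_nvme_get_io_paths and extract one boolean (current, connected or accessible) for one trsvcid. A sketch of the port_status helper being traced:

    # port_status <trsvcid> <field> <expected>
    port_status() {
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }
    port_status 4421 current true    # holds once the 4420 listener drops to non_optimized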
00:25:57.186 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:57.186 11:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:58.569 11:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:58.569 11:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:58.569 11:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.569 11:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.569 11:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.569 11:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:58.569 11:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.569 11:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.569 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.569 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.569 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.569 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.829 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.829 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.829 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.830 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.090 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.090 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.090 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:59.090 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.090 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.090 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:59.349 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.349 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.349 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.349 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:59.349 11:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:59.609 11:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:59.868 11:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:00.808 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:00.808 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:00.808 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.808 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:01.067 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.067 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:01.068 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.068 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:01.068 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.068 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:01.068 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.068 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.327 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.327 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.327 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.327 11:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.587 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.587 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.587 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.587 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.587 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.587 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.847 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.847 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.847 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.847 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:01.847 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.107 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:02.366 11:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:03.306 11:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:03.306 11:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:03.306 11:59:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.306 11:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.565 11:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.565 11:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:03.565 11:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.565 11:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.565 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.565 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.565 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.565 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.825 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.825 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.825 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.825 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:04.086 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.086 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:04.086 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.086 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.086 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.086 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:04.086 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.086 11:59:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.345 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.345 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:04.345 11:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:04.606 11:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:04.606 11:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.991 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.251 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.251 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.251 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.251 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.511 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.511 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:06.511 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.511 11:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.771 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.771 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:06.771 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.771 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.771 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.771 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:06.771 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:07.030 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.291 11:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:08.230 11:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:08.230 11:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:08.230 11:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.230 11:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.490 11:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.490 11:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:08.490 11:59:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.490 11:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.490 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.490 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.490 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.490 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.750 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.750 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.750 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.750 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:09.010 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.010 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:09.010 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.010 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.010 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.010 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.010 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.010 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.270 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.270 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:09.530 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:09.530 11:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:09.530 11:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.789 11:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:10.729 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:10.729 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.729 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.729 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.989 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.989 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.989 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.989 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.249 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.249 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.249 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.249 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.507 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.507 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.507 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.507 11:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.507 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.507 11:59:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.507 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.507 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.766 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.766 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.766 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.766 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.026 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.026 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:12.026 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.026 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:12.285 11:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:13.225 11:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:13.225 11:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:13.225 11:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.225 11:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.485 11:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.485 11:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:13.485 11:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.485 11:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.745 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.745 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.745 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.745 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.745 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.745 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.745 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.745 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.005 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.005 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.005 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.005 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.267 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.267 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.267 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.267 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.527 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.527 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:14.527 11:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.527 11:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:14.788 11:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
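Note: the xtrace above keeps replaying the same three helpers from host/multipath_status.sh. Reconstructed from the trace alone (the verbatim script may differ in detail), they presumably look like this:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# set_ANA_state <state-for-4420> <state-for-4421>: set the ANA state of both
# cnode1 listeners, as traced at script lines 59-60.
set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# port_status <port> <field> <expected>: query bdevperf's io_paths over its
# private RPC socket and assert one field of the path on <port> (script line 64).
port_status() {
    [[ $($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

# check_status: the six assertions traced at script lines 68-73, in this order:
# 4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible
check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

Each set_ANA_state is followed by a one-second sleep so the initiator can digest the ANA change notification before check_status asserts the new path states. To eyeball all six fields in one RPC round trip instead of six, something like:

$rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
    jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'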
00:26:15.731 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:15.731 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.731 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.731 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.991 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.991 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:15.991 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.991 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.251 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.251 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.251 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.251 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.251 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.251 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.251 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.251 12:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.511 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.511 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.511 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.511 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.770 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.770 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.770 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.770 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.030 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.030 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:17.030 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:17.030 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:17.291 12:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:18.234 12:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:18.234 12:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:18.234 12:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.234 12:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.494 12:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.494 12:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:18.494 12:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.494 12:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.754 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.754 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.754 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.754 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.754 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:18.754 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.754 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.754 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.014 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.014 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.014 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.014 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1143605 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1143605 ']' 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1143605 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.275 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1143605 00:26:19.539 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:19.539 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:19.539 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1143605' 00:26:19.539 killing process with pid 1143605 00:26:19.539 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1143605 00:26:19.539 12:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1143605 00:26:19.539 { 00:26:19.539 "results": [ 00:26:19.539 { 00:26:19.539 "job": "Nvme0n1", 
00:26:19.539 "core_mask": "0x4", 00:26:19.539 "workload": "verify", 00:26:19.539 "status": "terminated", 00:26:19.539 "verify_range": { 00:26:19.539 "start": 0, 00:26:19.539 "length": 16384 00:26:19.539 }, 00:26:19.539 "queue_depth": 128, 00:26:19.539 "io_size": 4096, 00:26:19.539 "runtime": 26.849224, 00:26:19.539 "iops": 11957.291577588983, 00:26:19.539 "mibps": 46.708170224956966, 00:26:19.539 "io_failed": 0, 00:26:19.539 "io_timeout": 0, 00:26:19.539 "avg_latency_us": 10685.588029055207, 00:26:19.539 "min_latency_us": 583.68, 00:26:19.539 "max_latency_us": 3019898.88 00:26:19.539 } 00:26:19.539 ], 00:26:19.539 "core_count": 1 00:26:19.539 } 00:26:19.539 12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1143605 00:26:19.539 12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:19.539 [2024-10-11 11:59:35.132948] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:26:19.539 [2024-10-11 11:59:35.133028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1143605 ] 00:26:19.539 [2024-10-11 11:59:35.217519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.539 [2024-10-11 11:59:35.268412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.539 Running I/O for 90 seconds... 00:26:19.539 10927.00 IOPS, 42.68 MiB/s [2024-10-11T10:00:04.171Z] 11060.00 IOPS, 43.20 MiB/s [2024-10-11T10:00:04.171Z] 11055.33 IOPS, 43.18 MiB/s [2024-10-11T10:00:04.171Z] 11420.50 IOPS, 44.61 MiB/s [2024-10-11T10:00:04.171Z] 11738.60 IOPS, 45.85 MiB/s [2024-10-11T10:00:04.171Z] 11950.50 IOPS, 46.68 MiB/s [2024-10-11T10:00:04.171Z] 12091.43 IOPS, 47.23 MiB/s [2024-10-11T10:00:04.171Z] 12219.62 IOPS, 47.73 MiB/s [2024-10-11T10:00:04.171Z] 12303.00 IOPS, 48.06 MiB/s [2024-10-11T10:00:04.171Z] 12358.60 IOPS, 48.28 MiB/s [2024-10-11T10:00:04.171Z] 12416.73 IOPS, 48.50 MiB/s [2024-10-11T10:00:04.171Z] [2024-10-11 11:59:49.032220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
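Note: each pair of entries in this dump is a submitted command (nvme_io_qpair_print_command: opcode, sqid/cid, lba, len) followed by its completion (spdk_nvme_print_completion). The "(03/02)" in the completions is status code type 0x3 (path-related) / status code 0x02, ANA inaccessible, which is exactly what in-flight I/O is expected to return while the listener it was queued on sits in the inaccessible ANA state; dnr:0 means the Do Not Retry bit is clear, so bdev_nvme is free to requeue the I/O on the remaining accessible path. A quick way to gauge how much of the run's traffic hit an inaccessible path (against the same try.txt being cat'ed here):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt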
00:26:19.539 [2024-10-11 11:59:49.032839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.032988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.032999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 
nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.033004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.033015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.033020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.033031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.033036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.033047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.033052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.033063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.033069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:19.539 [2024-10-11 11:59:49.033079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.539 [2024-10-11 11:59:49.033085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.033274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.033279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.540 [2024-10-11 11:59:49.034111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 
dnr:0 00:26:19.540 [2024-10-11 11:59:49.034146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:19.540 [2024-10-11 11:59:49.034647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.540 [2024-10-11 11:59:49.034652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
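Note: two arithmetic cross-checks tie this dump to the JSON summary above. Each traced WRITE moves len:8 blocks and its SGL descriptor carries len:0x1000, i.e. 4096 bytes (8 blocks of 512 bytes), matching the summary's "io_size": 4096; multiplying the reported IOPS by that io_size reproduces the reported throughput:

echo $((8 * 512))    # 4096 bytes per I/O, the "io_size" in the summary
awk 'BEGIN { printf "%.2f MiB/s\n", 11957.291577588983 * 4096 / 1048576 }'    # ~46.71, the summary's "mibps"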
00:26:19.541 [2024-10-11 11:59:49.034727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.034989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.034994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 11:59:49.035222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 11:59:49.035227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:19.541 12438.58 IOPS, 48.59 MiB/s [2024-10-11T10:00:04.173Z] 11481.77 IOPS, 44.85 MiB/s [2024-10-11T10:00:04.173Z] 10661.64 IOPS, 41.65 MiB/s [2024-10-11T10:00:04.173Z] 9981.87 IOPS, 38.99 MiB/s [2024-10-11T10:00:04.173Z] 10168.31 IOPS, 39.72 MiB/s [2024-10-11T10:00:04.173Z] 10332.71 IOPS, 40.36 MiB/s [2024-10-11T10:00:04.173Z] 10668.44 IOPS, 41.67 MiB/s [2024-10-11T10:00:04.173Z] 10990.84 IOPS, 42.93 MiB/s [2024-10-11T10:00:04.173Z] 11194.25 IOPS, 43.73 MiB/s [2024-10-11T10:00:04.173Z] 11272.95 IOPS, 44.03 MiB/s [2024-10-11T10:00:04.173Z] 11348.27 IOPS, 44.33 MiB/s [2024-10-11T10:00:04.173Z] 11558.00 IOPS, 45.15 MiB/s [2024-10-11T10:00:04.173Z] 11767.92 IOPS, 45.97 MiB/s [2024-10-11T10:00:04.173Z] [2024-10-11 12:00:01.732470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:19.541 
[2024-10-11 12:00:01.732618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.541 [2024-10-11 12:00:01.732740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:19.541 [2024-10-11 12:00:01.732751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:19.542 [2024-10-11 12:00:01.732917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.732933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.732949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.732960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.732966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733863] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.733992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.733997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 12:00:01.734008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.542 [2024-10-11 12:00:01.734013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:19.542 [2024-10-11 
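For readers decoding these notices: SPDK prints NVMe completion status as an (SCT/SC) pair, and (03/02) is status code type 3h (path-related status) with status code 02h, which is the ASYMMETRIC ACCESS INACCESSIBLE string shown alongside it; the multipath layer evidently retries the I/O on the remaining path, since the job below finishes with zero failures. A minimal bash sketch (editorial, not part of the harness) covering only the path-related codes that turn up in runs like this:

    # Map the "(SCT/SC)" pair from spdk_nvme_print_completion onto the
    # NVMe path-related status names; anything else is reported raw.
    decode_status() {
      case "$1/$2" in
        03/01) echo "ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
        03/02) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;
        03/03) echo "ASYMMETRIC ACCESS TRANSITION" ;;
        *)     echo "unrecognized SCT/SC pair $1/$2" ;;
      esac
    }
    decode_status 03 02   # -> ASYMMETRIC ACCESS INACCESSIBLE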
[log collapsed: the final notices of the run, READ lba 114496 through 114608 and WRITE lba 115040/115056, again ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1.]
11904.76 IOPS, 46.50 MiB/s [2024-10-11T10:00:04.175Z]
11936.04 IOPS, 46.63 MiB/s [2024-10-11T10:00:04.175Z]
Received shutdown signal, test time was about 26.849838 seconds

Latency(us)
Device Information                                                       : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average      min         max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
    Nvme0n1                                                              :      26.85  11957.29  46.71    0.00   0.00  10685.59   583.68  3019898.88
===================================================================================================================
    Total                                                                :             11957.29  46.71    0.00   0.00  10685.59   583.68  3019898.88

12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
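The MiB/s column in the table above follows directly from IOPS at the job's 4096-byte I/O size, which makes for a quick sanity check of the summary:

    # With 4096-byte I/Os: MiB/s = IOPS * 4096 / 1048576 = IOPS / 256
    awk 'BEGIN { printf "%.2f MiB/s\n", 11957.29 * 4096 / 1048576 }'
    # prints 46.71 MiB/s, matching the Nvme0n1 row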
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1143241 ']'
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1143241
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1143241 ']'
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1143241
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1143241
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1143241'
killing process with pid 1143241
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1143241
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1143241
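The killprocess trace above follows a fixed pattern before it kills the target app (reactor_0, the SPDK nvmf_tgt reactor). A hedged re-creation of the visible logic, simplified from what autotest_common.sh appears to do:

    # Sketch of the traced kill sequence; not the verbatim helper.
    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1               # mirrors: '[' -z 1143241 ']'
      kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if already gone
      if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1  # never kill a bare sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"   # reap the child (works because the harness spawned it)
    }

The iptr step that follows restores iptables from a dump filtered through grep -v SPDK_NVMF, so only the harness's own tagged rules are dropped.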
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
12:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1

real 0m41.336s
user 1m47.144s
sys 0m11.441s

12:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
12:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_host_multipath_status
************************************
12:00:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
12:00:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
12:00:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
12:00:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_discovery_remove_ifc
************************************
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='[same option list as above]'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov [same option list as above]'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov [same option list as above]'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
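The cmp_versions walk above compares the lcov version (1.15) against 2 component by component: both strings are split on the IFS set .-:, each component is validated as a decimal, and the first unequal pair decides the result. A condensed sketch of the same idea, assuming purely numeric components (the real scripts/common.sh helper also handles the other comparison operators):

    # Return 0 (true) when $1 is a strictly older version than $2.
    lt() {
      local -a v1 v2
      local i max
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
        # missing components compare as 0, so 1.15 is treated as 1.15.0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the trace's return 0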
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[log collapsed: paths/export.sh@2 through @6 repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, then export and echo the result; the printed PATH repeats those three toolchain directories several times ahead of the stock /usr/local/bin:/usr/sbin:... entries.]
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
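The "[: : integer expression expected" message above is a quoting bug in the sourced common.sh, not a test failure: an unset variable reaches the numeric test at line 33 as an empty string, so test exits with status 2 and the script simply falls through. A sketch of the failure mode and the usual guard (VALUE is a stand-in name; the trace does not show which variable line 33 actually tests):

    unset VALUE
    [ "$VALUE" -eq 1 ] 2>/dev/null || echo "empty string is not an integer"
    # Defaulting the expansion keeps the test well-formed and simply false:
    [ "${VALUE:-0}" -eq 1 ] || echo "defaulted test evaluates cleanly to false"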
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]]
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
12:00:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=()
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=()
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=()
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=()
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=()
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=()
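The NVME_HOSTNQN/NVME_HOSTID pair initialized above is the identity a host presents when connecting to a subsystem. A hedged example of how those values are typically consumed with nvme-cli (the 10.0.0.2:4420 target address comes from the nvmf_tcp_init step further below; the cnode1 subsystem name is illustrative, following this run's nqn base):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # the bare UUID portion of the NQN
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"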
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:30.641 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.641 12:00:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:30.641 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:30.641 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.641 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:30.642 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:30.642 
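At this point nvmf_tcp_init has split the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1, with an iptables rule admitting NVMe/TCP traffic on port 4420. A minimal sketch of that setup, commands and names copied from the xtrace above (the authoritative version is the helper in nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP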
12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:30.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:26:30.642 00:26:30.642 --- 10.0.0.2 ping statistics --- 00:26:30.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.642 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:26:30.642 00:26:30.642 --- 10.0.0.1 ping statistics --- 00:26:30.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.642 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1154196 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1154196 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1154196 ']' 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
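With both directions ping-clean, the harness arms NVMF_TARGET_NS_CMD so that every subsequent target-side command runs inside the namespace; nvmf_tgt therefore listens on 10.0.0.2 while the host-side tooling stays in the root namespace. Roughly, as the arrays appear in the trace above:

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    # every target invocation is prefixed, e.g. the reverse-path ping:
    "${NVMF_TARGET_NS_CMD[@]}" ping -c 1 10.0.0.1
    modprobe nvme-tcp   # kernel NVMe/TCP support for the host side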
00:26:30.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.642 12:00:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.642 [2024-10-11 12:00:14.463827] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:26:30.642 [2024-10-11 12:00:14.463905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.642 [2024-10-11 12:00:14.551495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.642 [2024-10-11 12:00:14.603241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.642 [2024-10-11 12:00:14.603291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.642 [2024-10-11 12:00:14.603299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.642 [2024-10-11 12:00:14.603306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.642 [2024-10-11 12:00:14.603313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.642 [2024-10-11 12:00:14.604057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.904 [2024-10-11 12:00:15.349964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.904 [2024-10-11 12:00:15.358200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:30.904 null0 00:26:30.904 [2024-10-11 12:00:15.390148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1154434 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1154434 /tmp/host.sock 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1154434 ']' 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:30.904 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.904 12:00:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.904 [2024-10-11 12:00:15.468925] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:26:30.904 [2024-10-11 12:00:15.468990] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154434 ] 00:26:31.165 [2024-10-11 12:00:15.549202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.165 [2024-10-11 12:00:15.602040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.738 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.999 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.999 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:31.999 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.999 12:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.940 [2024-10-11 12:00:17.438906] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:32.940 [2024-10-11 12:00:17.438946] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:32.940 [2024-10-11 12:00:17.438971] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:32.940 [2024-10-11 12:00:17.526227] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:33.201 [2024-10-11 12:00:17.590233] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:33.201 [2024-10-11 12:00:17.590315] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:33.201 [2024-10-11 12:00:17.590341] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:33.201 [2024-10-11 12:00:17.590361] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:33.201 [2024-10-11 12:00:17.590387] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.201 [2024-10-11 12:00:17.598282] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15be890 was disconnected and freed. delete nvme_qpair. 
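The waiting that follows leans on two small helpers from host/discovery_remove_ifc.sh whose shape can be read straight off the @29/@33/@34 xtrace lines. A plausible reconstruction (rpc_cmd is the harness's wrapper around scripts/rpc.py):

    get_bdev_list() {
        # sorted, space-separated bdev names from the host app, e.g. "nvme0n1"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the list matches ("" means: until it is empty)
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }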
00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.201 12:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.582 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.583 12:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.526 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.526 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.526 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.526 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.527 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.527 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.527 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:35.527 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.527 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:35.527 12:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:36.468 12:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:37.408 12:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:37.408 12:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:37.408 12:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:37.408 12:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.408 12:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:37.408 12:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.408 12:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.408 12:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.668 12:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:26:37.668 12:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.609 [2024-10-11 12:00:23.030821] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:38.609 [2024-10-11 12:00:23.030859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.609 [2024-10-11 12:00:23.030869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.609 [2024-10-11 12:00:23.030877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.609 [2024-10-11 12:00:23.030882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.609 [2024-10-11 12:00:23.030888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.609 [2024-10-11 12:00:23.030894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.609 [2024-10-11 12:00:23.030904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.609 [2024-10-11 12:00:23.030909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.609 [2024-10-11 12:00:23.030915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.609 [2024-10-11 12:00:23.030920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.609 [2024-10-11 12:00:23.030925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159b100 is same with the state(6) to be set 00:26:38.609 [2024-10-11 12:00:23.040843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159b100 (9): Bad file descriptor 00:26:38.609 [2024-10-11 12:00:23.050878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:38.609 12:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.609 12:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.609 12:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.609 12:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.609 12:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.609 12:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.609 12:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.552 [2024-10-11 12:00:24.107731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:39.552 [2024-10-11 12:00:24.107819] 
nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159b100 with addr=10.0.0.2, port=4420 00:26:39.552 [2024-10-11 12:00:24.107851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159b100 is same with the state(6) to be set 00:26:39.552 [2024-10-11 12:00:24.107903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159b100 (9): Bad file descriptor 00:26:39.552 [2024-10-11 12:00:24.108999] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:39.552 [2024-10-11 12:00:24.109068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:39.552 [2024-10-11 12:00:24.109091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:39.552 [2024-10-11 12:00:24.109114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:39.552 [2024-10-11 12:00:24.109177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:39.552 [2024-10-11 12:00:24.109202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:39.552 12:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.552 12:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.552 12:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.495 [2024-10-11 12:00:25.111600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.495 [2024-10-11 12:00:25.111616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.495 [2024-10-11 12:00:25.111622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.495 [2024-10-11 12:00:25.111627] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:40.495 [2024-10-11 12:00:25.111641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
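The one-second reconnect cadence and the quick give-up seen here are exactly the knobs passed to bdev_nvme_start_discovery when the test began: with the interface gone, each connect() times out (errno 110), the reset fails, and after the loss timeout the controller is dropped. For reference, the RPC as issued earlier in this test, flag values copied from the trace:

    # --reconnect-delay-sec 1       retry the connection once per second
    # --fast-io-fail-timeout-sec 1  fail queued I/O after one second offline
    # --ctrlr-loss-timeout-sec 2    delete the controller after two seconds offline
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach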
00:26:40.495 [2024-10-11 12:00:25.111657] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:40.495 [2024-10-11 12:00:25.111677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.495 [2024-10-11 12:00:25.111684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.495 [2024-10-11 12:00:25.111691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.495 [2024-10-11 12:00:25.111696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.495 [2024-10-11 12:00:25.111702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.495 [2024-10-11 12:00:25.111707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.495 [2024-10-11 12:00:25.111713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.495 [2024-10-11 12:00:25.111718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.495 [2024-10-11 12:00:25.111723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.495 [2024-10-11 12:00:25.111728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.495 [2024-10-11 12:00:25.111733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
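Once the loss timeout expires the discovery entry is removed and nvme0n1 vanishes from bdev_get_bdevs, which is what the wait_for_bdev '' loop above is polling for. When reproducing by hand, the teardown can be watched with standard SPDK RPCs (not part of this trace):

    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers     # per-controller state
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info  # discovery service entries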
00:26:40.495 [2024-10-11 12:00:25.112161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158a840 (9): Bad file descriptor 00:26:40.495 [2024-10-11 12:00:25.113170] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:40.495 [2024-10-11 12:00:25.113179] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:40.757 12:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.142 12:00:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:42.142 12:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.714 [2024-10-11 12:00:27.123586] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:42.714 [2024-10-11 12:00:27.123600] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:42.714 [2024-10-11 12:00:27.123610] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:42.714 [2024-10-11 12:00:27.253994] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:42.975 [2024-10-11 12:00:27.354237] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:42.975 [2024-10-11 12:00:27.354267] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:42.975 [2024-10-11 12:00:27.354281] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:42.975 [2024-10-11 12:00:27.354292] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:42.975 [2024-10-11 12:00:27.354298] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:42.975 [2024-10-11 12:00:27.361935] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15968d0 was disconnected and freed. delete nvme_qpair. 
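Recovery is the mirror image of the fault injection: the @82/@83 steps above put the address back and raise the link, the still-running discovery poller finds 10.0.0.2:8009 again, and the subsystem re-attaches as nvme1. From the trace, names as logged:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # discovery re-attaches nqn.2016-06.io.spdk:cnode0, now as controller nvme1,
    # and wait_for_bdev nvme1n1 unblocks once bdev_get_bdevs reports it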
00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1154434 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1154434 ']' 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1154434 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1154434 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1154434' 00:26:42.975 killing process with pid 1154434 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1154434 00:26:42.975 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1154434 00:26:43.235 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:43.235 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:43.236 rmmod nvme_tcp 00:26:43.236 rmmod nvme_fabrics 00:26:43.236 rmmod nvme_keyring 00:26:43.236 12:00:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1154196 ']' 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1154196 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1154196 ']' 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1154196 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1154196 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1154196' 00:26:43.236 killing process with pid 1154196 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1154196 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1154196 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.236 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.497 12:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.409 12:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:45.409 00:26:45.409 real 0m23.307s 00:26:45.409 user 0m27.153s 00:26:45.409 sys 0m7.149s 00:26:45.409 12:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.409 12:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.409 ************************************ 00:26:45.409 END TEST nvmf_discovery_remove_ifc 00:26:45.410 ************************************ 00:26:45.410 12:00:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:45.410 12:00:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:45.410 12:00:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.410 12:00:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.410 ************************************ 00:26:45.410 START TEST nvmf_identify_kernel_target 00:26:45.410 ************************************ 00:26:45.410 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:45.671 * Looking for test storage... 00:26:45.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:45.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.671 --rc genhtml_branch_coverage=1 00:26:45.671 --rc genhtml_function_coverage=1 00:26:45.671 --rc genhtml_legend=1 00:26:45.671 --rc geninfo_all_blocks=1 00:26:45.671 --rc geninfo_unexecuted_blocks=1 00:26:45.671 00:26:45.671 ' 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:45.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.671 --rc genhtml_branch_coverage=1 00:26:45.671 --rc genhtml_function_coverage=1 00:26:45.671 --rc genhtml_legend=1 00:26:45.671 --rc geninfo_all_blocks=1 00:26:45.671 --rc geninfo_unexecuted_blocks=1 00:26:45.671 00:26:45.671 ' 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:45.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.671 --rc genhtml_branch_coverage=1 00:26:45.671 --rc genhtml_function_coverage=1 00:26:45.671 --rc genhtml_legend=1 00:26:45.671 --rc geninfo_all_blocks=1 00:26:45.671 --rc geninfo_unexecuted_blocks=1 00:26:45.671 00:26:45.671 ' 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:45.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.671 --rc genhtml_branch_coverage=1 00:26:45.671 --rc genhtml_function_coverage=1 00:26:45.671 --rc genhtml_legend=1 00:26:45.671 --rc geninfo_all_blocks=1 00:26:45.671 --rc geninfo_unexecuted_blocks=1 00:26:45.671 00:26:45.671 ' 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.671 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:45.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.672 12:00:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.815 12:00:37 
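The "[: : integer expression expected" complaint above is bash objecting to an empty string being fed to the numeric -eq test at common.sh line 33. A minimal reproduction and a defensive form (the variable name here is illustrative; the trace does not show which value was empty):

  flag=''
  [ "$flag" -eq 1 ] && echo enabled        # -> [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting keeps the operand numeric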
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:53.815 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:53.816 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:53.816 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:53.816 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:53.816 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
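The net-device resolution traced here reduces to one sysfs glob per PCI function; condensed, with the address taken from the log:

  pci=0000:4b:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"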
-- # net_devs+=("${pci_net_devs[@]}") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:26:53.816 00:26:53.816 --- 10.0.0.2 ping statistics --- 00:26:53.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.816 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:53.816 00:26:53.816 --- 10.0.0.1 ping statistics --- 00:26:53.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.816 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.816 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.817 12:00:37 
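Condensed, the namespace wiring and reachability check traced above (commands, interface names, and addresses exactly as recorded):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420
  ping -c 1 10.0.0.2                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns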
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:53.817 12:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:57.119 Waiting for block devices as requested 00:26:57.119 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:57.119 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:57.119 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:57.119 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:57.119 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:57.119 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:57.119 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:57.119 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:57.380 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:57.380 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:57.640 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:57.640 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:57.640 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:57.901 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:57.901 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:57.901 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:58.162 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
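configure_kernel_target, entered above, assembles the kernel target out of configfs nodes; the bare echoes that follow in the trace map onto the standard nvmet attribute layout. xtrace hides redirection targets, so the paths below are inferred from that layout rather than read from the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # surfaces as Model Number below
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # backing device chosen by the harness
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                            # publish the subsystem on the port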
00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:58.162 No valid GPT data, bailing 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:58.162 00:26:58.162 Discovery Log Number of Records 2, Generation counter 2 00:26:58.162 =====Discovery Log Entry 0====== 00:26:58.162 trtype: tcp 00:26:58.162 adrfam: ipv4 00:26:58.162 subtype: current discovery subsystem 00:26:58.162 treq: not specified, sq flow control disable supported 00:26:58.162 portid: 1 00:26:58.162 trsvcid: 4420 00:26:58.162 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:58.162 traddr: 10.0.0.1 00:26:58.162 eflags: none 00:26:58.162 sectype: none 00:26:58.162 =====Discovery Log Entry 1====== 00:26:58.162 trtype: tcp 00:26:58.162 adrfam: ipv4 00:26:58.162 subtype: nvme subsystem 00:26:58.162 treq: not specified, sq flow control disable 
supported 00:26:58.162 portid: 1 00:26:58.162 trsvcid: 4420 00:26:58.162 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:58.162 traddr: 10.0.0.1 00:26:58.162 eflags: none 00:26:58.162 sectype: none 00:26:58.162 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:58.162 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:58.162 ===================================================== 00:26:58.162 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:58.162 ===================================================== 00:26:58.162 Controller Capabilities/Features 00:26:58.162 ================================ 00:26:58.162 Vendor ID: 0000 00:26:58.162 Subsystem Vendor ID: 0000 00:26:58.162 Serial Number: e0b9da62f31b10ed9a9b 00:26:58.162 Model Number: Linux 00:26:58.162 Firmware Version: 6.8.9-20 00:26:58.162 Recommended Arb Burst: 0 00:26:58.162 IEEE OUI Identifier: 00 00 00 00:26:58.162 Multi-path I/O 00:26:58.162 May have multiple subsystem ports: No 00:26:58.162 May have multiple controllers: No 00:26:58.162 Associated with SR-IOV VF: No 00:26:58.162 Max Data Transfer Size: Unlimited 00:26:58.162 Max Number of Namespaces: 0 00:26:58.162 Max Number of I/O Queues: 1024 00:26:58.162 NVMe Specification Version (VS): 1.3 00:26:58.162 NVMe Specification Version (Identify): 1.3 00:26:58.162 Maximum Queue Entries: 1024 00:26:58.162 Contiguous Queues Required: No 00:26:58.162 Arbitration Mechanisms Supported 00:26:58.162 Weighted Round Robin: Not Supported 00:26:58.162 Vendor Specific: Not Supported 00:26:58.162 Reset Timeout: 7500 ms 00:26:58.162 Doorbell Stride: 4 bytes 00:26:58.162 NVM Subsystem Reset: Not Supported 00:26:58.162 Command Sets Supported 00:26:58.163 NVM Command Set: Supported 00:26:58.163 Boot Partition: Not Supported 00:26:58.163 Memory Page Size Minimum: 4096 bytes 00:26:58.163 Memory Page Size Maximum: 4096 bytes 00:26:58.163 Persistent Memory Region: Not Supported 00:26:58.163 Optional Asynchronous Events Supported 00:26:58.163 Namespace Attribute Notices: Not Supported 00:26:58.163 Firmware Activation Notices: Not Supported 00:26:58.163 ANA Change Notices: Not Supported 00:26:58.163 PLE Aggregate Log Change Notices: Not Supported 00:26:58.163 LBA Status Info Alert Notices: Not Supported 00:26:58.163 EGE Aggregate Log Change Notices: Not Supported 00:26:58.163 Normal NVM Subsystem Shutdown event: Not Supported 00:26:58.163 Zone Descriptor Change Notices: Not Supported 00:26:58.163 Discovery Log Change Notices: Supported 00:26:58.163 Controller Attributes 00:26:58.163 128-bit Host Identifier: Not Supported 00:26:58.163 Non-Operational Permissive Mode: Not Supported 00:26:58.163 NVM Sets: Not Supported 00:26:58.163 Read Recovery Levels: Not Supported 00:26:58.163 Endurance Groups: Not Supported 00:26:58.163 Predictable Latency Mode: Not Supported 00:26:58.163 Traffic Based Keep ALive: Not Supported 00:26:58.163 Namespace Granularity: Not Supported 00:26:58.163 SQ Associations: Not Supported 00:26:58.163 UUID List: Not Supported 00:26:58.163 Multi-Domain Subsystem: Not Supported 00:26:58.163 Fixed Capacity Management: Not Supported 00:26:58.163 Variable Capacity Management: Not Supported 00:26:58.163 Delete Endurance Group: Not Supported 00:26:58.163 Delete NVM Set: Not Supported 00:26:58.163 Extended LBA Formats Supported: Not Supported 00:26:58.163 Flexible Data Placement 
Supported: Not Supported 00:26:58.163 00:26:58.163 Controller Memory Buffer Support 00:26:58.163 ================================ 00:26:58.163 Supported: No 00:26:58.163 00:26:58.163 Persistent Memory Region Support 00:26:58.163 ================================ 00:26:58.163 Supported: No 00:26:58.163 00:26:58.163 Admin Command Set Attributes 00:26:58.163 ============================ 00:26:58.163 Security Send/Receive: Not Supported 00:26:58.163 Format NVM: Not Supported 00:26:58.163 Firmware Activate/Download: Not Supported 00:26:58.163 Namespace Management: Not Supported 00:26:58.163 Device Self-Test: Not Supported 00:26:58.163 Directives: Not Supported 00:26:58.163 NVMe-MI: Not Supported 00:26:58.163 Virtualization Management: Not Supported 00:26:58.163 Doorbell Buffer Config: Not Supported 00:26:58.163 Get LBA Status Capability: Not Supported 00:26:58.163 Command & Feature Lockdown Capability: Not Supported 00:26:58.163 Abort Command Limit: 1 00:26:58.163 Async Event Request Limit: 1 00:26:58.163 Number of Firmware Slots: N/A 00:26:58.163 Firmware Slot 1 Read-Only: N/A 00:26:58.163 Firmware Activation Without Reset: N/A 00:26:58.163 Multiple Update Detection Support: N/A 00:26:58.163 Firmware Update Granularity: No Information Provided 00:26:58.163 Per-Namespace SMART Log: No 00:26:58.163 Asymmetric Namespace Access Log Page: Not Supported 00:26:58.163 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:58.163 Command Effects Log Page: Not Supported 00:26:58.163 Get Log Page Extended Data: Supported 00:26:58.163 Telemetry Log Pages: Not Supported 00:26:58.163 Persistent Event Log Pages: Not Supported 00:26:58.163 Supported Log Pages Log Page: May Support 00:26:58.163 Commands Supported & Effects Log Page: Not Supported 00:26:58.163 Feature Identifiers & Effects Log Page:May Support 00:26:58.163 NVMe-MI Commands & Effects Log Page: May Support 00:26:58.163 Data Area 4 for Telemetry Log: Not Supported 00:26:58.163 Error Log Page Entries Supported: 1 00:26:58.163 Keep Alive: Not Supported 00:26:58.163 00:26:58.163 NVM Command Set Attributes 00:26:58.163 ========================== 00:26:58.163 Submission Queue Entry Size 00:26:58.163 Max: 1 00:26:58.163 Min: 1 00:26:58.163 Completion Queue Entry Size 00:26:58.163 Max: 1 00:26:58.163 Min: 1 00:26:58.163 Number of Namespaces: 0 00:26:58.163 Compare Command: Not Supported 00:26:58.163 Write Uncorrectable Command: Not Supported 00:26:58.163 Dataset Management Command: Not Supported 00:26:58.163 Write Zeroes Command: Not Supported 00:26:58.163 Set Features Save Field: Not Supported 00:26:58.163 Reservations: Not Supported 00:26:58.163 Timestamp: Not Supported 00:26:58.163 Copy: Not Supported 00:26:58.163 Volatile Write Cache: Not Present 00:26:58.163 Atomic Write Unit (Normal): 1 00:26:58.163 Atomic Write Unit (PFail): 1 00:26:58.163 Atomic Compare & Write Unit: 1 00:26:58.163 Fused Compare & Write: Not Supported 00:26:58.163 Scatter-Gather List 00:26:58.163 SGL Command Set: Supported 00:26:58.163 SGL Keyed: Not Supported 00:26:58.163 SGL Bit Bucket Descriptor: Not Supported 00:26:58.163 SGL Metadata Pointer: Not Supported 00:26:58.163 Oversized SGL: Not Supported 00:26:58.163 SGL Metadata Address: Not Supported 00:26:58.163 SGL Offset: Supported 00:26:58.163 Transport SGL Data Block: Not Supported 00:26:58.163 Replay Protected Memory Block: Not Supported 00:26:58.163 00:26:58.163 Firmware Slot Information 00:26:58.163 ========================= 00:26:58.163 Active slot: 0 00:26:58.163 00:26:58.163 00:26:58.163 Error Log 00:26:58.163 
========= 00:26:58.163 00:26:58.163 Active Namespaces 00:26:58.163 ================= 00:26:58.163 Discovery Log Page 00:26:58.163 ================== 00:26:58.163 Generation Counter: 2 00:26:58.163 Number of Records: 2 00:26:58.163 Record Format: 0 00:26:58.163 00:26:58.163 Discovery Log Entry 0 00:26:58.163 ---------------------- 00:26:58.163 Transport Type: 3 (TCP) 00:26:58.163 Address Family: 1 (IPv4) 00:26:58.163 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:58.163 Entry Flags: 00:26:58.163 Duplicate Returned Information: 0 00:26:58.163 Explicit Persistent Connection Support for Discovery: 0 00:26:58.163 Transport Requirements: 00:26:58.163 Secure Channel: Not Specified 00:26:58.163 Port ID: 1 (0x0001) 00:26:58.163 Controller ID: 65535 (0xffff) 00:26:58.163 Admin Max SQ Size: 32 00:26:58.163 Transport Service Identifier: 4420 00:26:58.163 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:58.163 Transport Address: 10.0.0.1 00:26:58.163 Discovery Log Entry 1 00:26:58.163 ---------------------- 00:26:58.163 Transport Type: 3 (TCP) 00:26:58.163 Address Family: 1 (IPv4) 00:26:58.163 Subsystem Type: 2 (NVM Subsystem) 00:26:58.163 Entry Flags: 00:26:58.163 Duplicate Returned Information: 0 00:26:58.163 Explicit Persistent Connection Support for Discovery: 0 00:26:58.163 Transport Requirements: 00:26:58.163 Secure Channel: Not Specified 00:26:58.163 Port ID: 1 (0x0001) 00:26:58.163 Controller ID: 65535 (0xffff) 00:26:58.163 Admin Max SQ Size: 32 00:26:58.163 Transport Service Identifier: 4420 00:26:58.163 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:58.163 Transport Address: 10.0.0.1 00:26:58.426 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.426 get_feature(0x01) failed 00:26:58.426 get_feature(0x02) failed 00:26:58.426 get_feature(0x04) failed 00:26:58.426 ===================================================== 00:26:58.426 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:58.426 ===================================================== 00:26:58.426 Controller Capabilities/Features 00:26:58.426 ================================ 00:26:58.426 Vendor ID: 0000 00:26:58.426 Subsystem Vendor ID: 0000 00:26:58.426 Serial Number: 7f3948ecc9b45b2e1514 00:26:58.426 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:58.426 Firmware Version: 6.8.9-20 00:26:58.426 Recommended Arb Burst: 6 00:26:58.426 IEEE OUI Identifier: 00 00 00 00:26:58.426 Multi-path I/O 00:26:58.426 May have multiple subsystem ports: Yes 00:26:58.426 May have multiple controllers: Yes 00:26:58.426 Associated with SR-IOV VF: No 00:26:58.426 Max Data Transfer Size: Unlimited 00:26:58.426 Max Number of Namespaces: 1024 00:26:58.426 Max Number of I/O Queues: 128 00:26:58.426 NVMe Specification Version (VS): 1.3 00:26:58.426 NVMe Specification Version (Identify): 1.3 00:26:58.426 Maximum Queue Entries: 1024 00:26:58.426 Contiguous Queues Required: No 00:26:58.426 Arbitration Mechanisms Supported 00:26:58.426 Weighted Round Robin: Not Supported 00:26:58.426 Vendor Specific: Not Supported 00:26:58.426 Reset Timeout: 7500 ms 00:26:58.426 Doorbell Stride: 4 bytes 00:26:58.426 NVM Subsystem Reset: Not Supported 00:26:58.426 Command Sets Supported 00:26:58.426 NVM Command Set: Supported 00:26:58.426 Boot Partition: Not Supported 00:26:58.426 
Memory Page Size Minimum: 4096 bytes 00:26:58.426 Memory Page Size Maximum: 4096 bytes 00:26:58.426 Persistent Memory Region: Not Supported 00:26:58.426 Optional Asynchronous Events Supported 00:26:58.426 Namespace Attribute Notices: Supported 00:26:58.426 Firmware Activation Notices: Not Supported 00:26:58.426 ANA Change Notices: Supported 00:26:58.426 PLE Aggregate Log Change Notices: Not Supported 00:26:58.426 LBA Status Info Alert Notices: Not Supported 00:26:58.426 EGE Aggregate Log Change Notices: Not Supported 00:26:58.426 Normal NVM Subsystem Shutdown event: Not Supported 00:26:58.426 Zone Descriptor Change Notices: Not Supported 00:26:58.426 Discovery Log Change Notices: Not Supported 00:26:58.426 Controller Attributes 00:26:58.426 128-bit Host Identifier: Supported 00:26:58.426 Non-Operational Permissive Mode: Not Supported 00:26:58.426 NVM Sets: Not Supported 00:26:58.426 Read Recovery Levels: Not Supported 00:26:58.427 Endurance Groups: Not Supported 00:26:58.427 Predictable Latency Mode: Not Supported 00:26:58.427 Traffic Based Keep ALive: Supported 00:26:58.427 Namespace Granularity: Not Supported 00:26:58.427 SQ Associations: Not Supported 00:26:58.427 UUID List: Not Supported 00:26:58.427 Multi-Domain Subsystem: Not Supported 00:26:58.427 Fixed Capacity Management: Not Supported 00:26:58.427 Variable Capacity Management: Not Supported 00:26:58.427 Delete Endurance Group: Not Supported 00:26:58.427 Delete NVM Set: Not Supported 00:26:58.427 Extended LBA Formats Supported: Not Supported 00:26:58.427 Flexible Data Placement Supported: Not Supported 00:26:58.427 00:26:58.427 Controller Memory Buffer Support 00:26:58.427 ================================ 00:26:58.427 Supported: No 00:26:58.427 00:26:58.427 Persistent Memory Region Support 00:26:58.427 ================================ 00:26:58.427 Supported: No 00:26:58.427 00:26:58.427 Admin Command Set Attributes 00:26:58.427 ============================ 00:26:58.427 Security Send/Receive: Not Supported 00:26:58.427 Format NVM: Not Supported 00:26:58.427 Firmware Activate/Download: Not Supported 00:26:58.427 Namespace Management: Not Supported 00:26:58.427 Device Self-Test: Not Supported 00:26:58.427 Directives: Not Supported 00:26:58.427 NVMe-MI: Not Supported 00:26:58.427 Virtualization Management: Not Supported 00:26:58.427 Doorbell Buffer Config: Not Supported 00:26:58.427 Get LBA Status Capability: Not Supported 00:26:58.427 Command & Feature Lockdown Capability: Not Supported 00:26:58.427 Abort Command Limit: 4 00:26:58.427 Async Event Request Limit: 4 00:26:58.427 Number of Firmware Slots: N/A 00:26:58.427 Firmware Slot 1 Read-Only: N/A 00:26:58.427 Firmware Activation Without Reset: N/A 00:26:58.427 Multiple Update Detection Support: N/A 00:26:58.427 Firmware Update Granularity: No Information Provided 00:26:58.427 Per-Namespace SMART Log: Yes 00:26:58.427 Asymmetric Namespace Access Log Page: Supported 00:26:58.427 ANA Transition Time : 10 sec 00:26:58.427 00:26:58.427 Asymmetric Namespace Access Capabilities 00:26:58.427 ANA Optimized State : Supported 00:26:58.427 ANA Non-Optimized State : Supported 00:26:58.427 ANA Inaccessible State : Supported 00:26:58.427 ANA Persistent Loss State : Supported 00:26:58.427 ANA Change State : Supported 00:26:58.427 ANAGRPID is not changed : No 00:26:58.427 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:58.427 00:26:58.427 ANA Group Identifier Maximum : 128 00:26:58.427 Number of ANA Group Identifiers : 128 00:26:58.427 Max Number of Allowed Namespaces : 1024 00:26:58.427 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:58.427 Command Effects Log Page: Supported 00:26:58.427 Get Log Page Extended Data: Supported 00:26:58.427 Telemetry Log Pages: Not Supported 00:26:58.427 Persistent Event Log Pages: Not Supported 00:26:58.427 Supported Log Pages Log Page: May Support 00:26:58.427 Commands Supported & Effects Log Page: Not Supported 00:26:58.427 Feature Identifiers & Effects Log Page:May Support 00:26:58.427 NVMe-MI Commands & Effects Log Page: May Support 00:26:58.427 Data Area 4 for Telemetry Log: Not Supported 00:26:58.427 Error Log Page Entries Supported: 128 00:26:58.427 Keep Alive: Supported 00:26:58.427 Keep Alive Granularity: 1000 ms 00:26:58.427 00:26:58.427 NVM Command Set Attributes 00:26:58.427 ========================== 00:26:58.427 Submission Queue Entry Size 00:26:58.427 Max: 64 00:26:58.427 Min: 64 00:26:58.427 Completion Queue Entry Size 00:26:58.427 Max: 16 00:26:58.427 Min: 16 00:26:58.427 Number of Namespaces: 1024 00:26:58.427 Compare Command: Not Supported 00:26:58.427 Write Uncorrectable Command: Not Supported 00:26:58.427 Dataset Management Command: Supported 00:26:58.427 Write Zeroes Command: Supported 00:26:58.427 Set Features Save Field: Not Supported 00:26:58.427 Reservations: Not Supported 00:26:58.427 Timestamp: Not Supported 00:26:58.427 Copy: Not Supported 00:26:58.427 Volatile Write Cache: Present 00:26:58.427 Atomic Write Unit (Normal): 1 00:26:58.427 Atomic Write Unit (PFail): 1 00:26:58.427 Atomic Compare & Write Unit: 1 00:26:58.427 Fused Compare & Write: Not Supported 00:26:58.427 Scatter-Gather List 00:26:58.427 SGL Command Set: Supported 00:26:58.427 SGL Keyed: Not Supported 00:26:58.427 SGL Bit Bucket Descriptor: Not Supported 00:26:58.427 SGL Metadata Pointer: Not Supported 00:26:58.427 Oversized SGL: Not Supported 00:26:58.427 SGL Metadata Address: Not Supported 00:26:58.427 SGL Offset: Supported 00:26:58.427 Transport SGL Data Block: Not Supported 00:26:58.427 Replay Protected Memory Block: Not Supported 00:26:58.427 00:26:58.427 Firmware Slot Information 00:26:58.427 ========================= 00:26:58.427 Active slot: 0 00:26:58.427 00:26:58.427 Asymmetric Namespace Access 00:26:58.427 =========================== 00:26:58.427 Change Count : 0 00:26:58.427 Number of ANA Group Descriptors : 1 00:26:58.427 ANA Group Descriptor : 0 00:26:58.427 ANA Group ID : 1 00:26:58.427 Number of NSID Values : 1 00:26:58.427 Change Count : 0 00:26:58.427 ANA State : 1 00:26:58.427 Namespace Identifier : 1 00:26:58.427 00:26:58.427 Commands Supported and Effects 00:26:58.427 ============================== 00:26:58.427 Admin Commands 00:26:58.427 -------------- 00:26:58.427 Get Log Page (02h): Supported 00:26:58.427 Identify (06h): Supported 00:26:58.427 Abort (08h): Supported 00:26:58.427 Set Features (09h): Supported 00:26:58.427 Get Features (0Ah): Supported 00:26:58.427 Asynchronous Event Request (0Ch): Supported 00:26:58.427 Keep Alive (18h): Supported 00:26:58.427 I/O Commands 00:26:58.427 ------------ 00:26:58.427 Flush (00h): Supported 00:26:58.427 Write (01h): Supported LBA-Change 00:26:58.427 Read (02h): Supported 00:26:58.427 Write Zeroes (08h): Supported LBA-Change 00:26:58.427 Dataset Management (09h): Supported 00:26:58.427 00:26:58.427 Error Log 00:26:58.427 ========= 00:26:58.427 Entry: 0 00:26:58.427 Error Count: 0x3 00:26:58.427 Submission Queue Id: 0x0 00:26:58.427 Command Id: 0x5 00:26:58.427 Phase Bit: 0 00:26:58.427 Status Code: 0x2 00:26:58.427 Status Code Type: 0x0 00:26:58.427 Do Not Retry: 1 00:26:58.427 
Error Location: 0x28 00:26:58.427 LBA: 0x0 00:26:58.427 Namespace: 0x0 00:26:58.427 Vendor Log Page: 0x0 00:26:58.427 ----------- 00:26:58.427 Entry: 1 00:26:58.427 Error Count: 0x2 00:26:58.427 Submission Queue Id: 0x0 00:26:58.427 Command Id: 0x5 00:26:58.427 Phase Bit: 0 00:26:58.427 Status Code: 0x2 00:26:58.427 Status Code Type: 0x0 00:26:58.427 Do Not Retry: 1 00:26:58.427 Error Location: 0x28 00:26:58.427 LBA: 0x0 00:26:58.427 Namespace: 0x0 00:26:58.427 Vendor Log Page: 0x0 00:26:58.427 ----------- 00:26:58.427 Entry: 2 00:26:58.427 Error Count: 0x1 00:26:58.427 Submission Queue Id: 0x0 00:26:58.427 Command Id: 0x4 00:26:58.427 Phase Bit: 0 00:26:58.427 Status Code: 0x2 00:26:58.427 Status Code Type: 0x0 00:26:58.427 Do Not Retry: 1 00:26:58.427 Error Location: 0x28 00:26:58.427 LBA: 0x0 00:26:58.427 Namespace: 0x0 00:26:58.427 Vendor Log Page: 0x0 00:26:58.427 00:26:58.427 Number of Queues 00:26:58.427 ================ 00:26:58.427 Number of I/O Submission Queues: 128 00:26:58.427 Number of I/O Completion Queues: 128 00:26:58.427 00:26:58.427 ZNS Specific Controller Data 00:26:58.427 ============================ 00:26:58.427 Zone Append Size Limit: 0 00:26:58.427 00:26:58.427 00:26:58.427 Active Namespaces 00:26:58.427 ================= 00:26:58.427 get_feature(0x05) failed 00:26:58.427 Namespace ID:1 00:26:58.427 Command Set Identifier: NVM (00h) 00:26:58.427 Deallocate: Supported 00:26:58.427 Deallocated/Unwritten Error: Not Supported 00:26:58.427 Deallocated Read Value: Unknown 00:26:58.427 Deallocate in Write Zeroes: Not Supported 00:26:58.427 Deallocated Guard Field: 0xFFFF 00:26:58.427 Flush: Supported 00:26:58.427 Reservation: Not Supported 00:26:58.427 Namespace Sharing Capabilities: Multiple Controllers 00:26:58.427 Size (in LBAs): 3750748848 (1788GiB) 00:26:58.427 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:58.427 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:58.427 UUID: e47cf7c0-11f2-452b-a927-d1a8c4304f43 00:26:58.427 Thin Provisioning: Not Supported 00:26:58.427 Per-NS Atomic Units: Yes 00:26:58.427 Atomic Write Unit (Normal): 8 00:26:58.427 Atomic Write Unit (PFail): 8 00:26:58.427 Preferred Write Granularity: 8 00:26:58.427 Atomic Compare & Write Unit: 8 00:26:58.427 Atomic Boundary Size (Normal): 0 00:26:58.427 Atomic Boundary Size (PFail): 0 00:26:58.427 Atomic Boundary Offset: 0 00:26:58.427 NGUID/EUI64 Never Reused: No 00:26:58.427 ANA group ID: 1 00:26:58.427 Namespace Write Protected: No 00:26:58.427 Number of LBA Formats: 1 00:26:58.427 Current LBA Format: LBA Format #00 00:26:58.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:58.428 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:58.428 rmmod nvme_tcp 00:26:58.428 rmmod nvme_fabrics 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
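For reference, the host-side queries that produced the discovery and identify output above are, per the trace (the spdk_nvme_identify binary path is shortened here):

  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'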
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.428 12:00:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:00.975 12:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:04.275 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:04.275 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:04.276 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:04.276 00:27:04.276 real 0m18.756s 00:27:04.276 user 0m5.083s 00:27:04.276 sys 0m10.747s 00:27:04.276 12:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:04.276 12:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.276 ************************************ 00:27:04.276 END TEST nvmf_identify_kernel_target 00:27:04.276 ************************************ 00:27:04.276 12:00:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:04.276 12:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:04.276 12:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:04.276 12:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.276 ************************************ 00:27:04.276 START TEST nvmf_auth_host 00:27:04.276 ************************************ 00:27:04.276 12:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:04.544 * Looking for test storage... 
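The clean_kernel_target teardown recorded just before the timing summary, condensed (the reverse of the configfs construction; as with setup, the echo's redirection target is inferred from the nvmet layout):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"     # quiesce the namespace first
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                # unload once configfs is empty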
00:27:04.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.544 12:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:04.544 12:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:04.544 12:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:04.544 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:04.544 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.545 --rc genhtml_branch_coverage=1 00:27:04.545 --rc genhtml_function_coverage=1 00:27:04.545 --rc genhtml_legend=1 00:27:04.545 --rc geninfo_all_blocks=1 00:27:04.545 --rc geninfo_unexecuted_blocks=1 00:27:04.545 00:27:04.545 ' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.545 --rc genhtml_branch_coverage=1 00:27:04.545 --rc genhtml_function_coverage=1 00:27:04.545 --rc genhtml_legend=1 00:27:04.545 --rc geninfo_all_blocks=1 00:27:04.545 --rc geninfo_unexecuted_blocks=1 00:27:04.545 00:27:04.545 ' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.545 --rc genhtml_branch_coverage=1 00:27:04.545 --rc genhtml_function_coverage=1 00:27:04.545 --rc genhtml_legend=1 00:27:04.545 --rc geninfo_all_blocks=1 00:27:04.545 --rc geninfo_unexecuted_blocks=1 00:27:04.545 00:27:04.545 ' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.545 --rc genhtml_branch_coverage=1 00:27:04.545 --rc genhtml_function_coverage=1 00:27:04.545 --rc genhtml_legend=1 00:27:04.545 --rc geninfo_all_blocks=1 00:27:04.545 --rc geninfo_unexecuted_blocks=1 00:27:04.545 00:27:04.545 ' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.545 12:00:49 
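The ver1/ver2 walk above is scripts/common.sh deciding whether the installed lcov predates 2.x. Stripped of its decimal validation, the comparison amounts to the following (a simplified sketch of cmp_versions, not the verbatim helper):

  lt() {   # succeed when dotted version $1 sorts before $2
      local -a ver1 ver2; local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # missing fields compare as 0
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1   # equal is not "less than"
  }
  lt 1.15 2 && echo 'lcov predates 2.x'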
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
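Two identities get minted in the trace above: nvme gen-hostnqn produces the uuid-flavored host NQN, and the harness reuses its UUID suffix as the bare host ID; the NVME_HOST array then carries both onto every initiator-side nvme invocation. A sketch of that pairing (flags as in stock nvme-cli; 10.0.0.1:4420 is the target this run wires up later):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare <uuid> portion, as in the trace
    nvme discover -t tcp -a 10.0.0.1 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"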
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:04.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
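The "[: : integer expression expected" complaint captured above is a genuine, if harmless, shell slip in the traced common.sh: an empty expansion reaches test's numeric -eq operator, yielding '[' '' -eq 1 ']'. The usual defensive spelling defaults the value first; a sketch with a hypothetical variable name, since xtrace does not show which expansion came up empty:

    [ "$SOME_FLAG" -eq 1 ]          # errors out when SOME_FLAG expands to ''
    [ "${SOME_FLAG:-0}" -eq 1 ]     # sketch: default to 0 before the numeric test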
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.545 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:04.546 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:04.546 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:04.546 12:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:12.837 12:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:12.837 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:12.837 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.837 
12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:12.837 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:12.837 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.837 12:00:56 
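The enumeration just traced works in two sysfs steps: every supported NIC is recognized purely by PCI vendor:device ID (the e810/x722/mlx arrays built earlier; 0x8086:0x159b is the E810 port found twice here), and each match then has its netdev name read back from the device's net/ directory rather than guessed. A condensed sketch of both steps, with the IDs taken from the trace:

    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor") dev=$(<"$pci/device")
        case "$ven $dev" in
            '0x8086 0x1592'|'0x8086 0x159b') kind='Intel E810' ;;
            '0x8086 0x37d2')                 kind='Intel X722' ;;
            '0x15b3 '*)                      kind='Mellanox'   ;;
            *) continue ;;
        esac
        for net in "$pci"/net/*; do         # netdev registered under this PCI function
            [[ -e $net ]] && echo "${pci##*/} ($kind) -> ${net##*/}"
        done
    done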
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:12.837 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:12.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:27:12.838 00:27:12.838 --- 10.0.0.2 ping statistics --- 00:27:12.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.838 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:27:12.838 00:27:12.838 --- 10.0.0.1 ping statistics --- 00:27:12.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.838 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1168616 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1168616 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1168616 ']' 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
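The nvmf_tcp_init sequence above splits the dual-port E810 into a self-contained point-to-point link on one host: cvl_0_0 moves into namespace cvl_0_0_ns_spdk and takes 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, an iptables rule opens TCP/4420, and the two pings prove reachability in both directions. Replayed in order (address flushes omitted), commands as traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns

Note that the nvmf_tgt which follows is launched inside that namespace via ip netns exec; its RPC socket stays reachable from the root namespace because unix sockets are path-based, not namespace-bound.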
00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:12.838 12:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.838 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.838 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:12.838 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:12.838 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:12.838 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=db7376535abea5bea066cda00134d819 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.lkK 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key db7376535abea5bea066cda00134d819 0 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 db7376535abea5bea066cda00134d819 0 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=db7376535abea5bea066cda00134d819 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.lkK 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.lkK 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lkK 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.099 12:00:57 
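Every gen_dhchap_key call in this stretch has the same shape: pull len/2 random bytes with xxd into a hex string of len characters, wrap it as a DHHC-1 secret via the inline python, write it to a mktemp'd file and chmod it 0600. To my reading, the DHHC-1 wrapper is the hash hint (0/1/2/3 for null/sha256/sha384/sha512, the digests map above) followed by base64 of the ASCII key with its CRC-32 appended little-endian; a sketch under that assumption, not the verbatim format_dhchap_key:

    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars, matching len=32
    # CRC trailer + base64 framing; the "<I" packing is my assumption about the trace's python
    python3 -c 'import base64,struct,sys,zlib;k=sys.argv[1].encode();print("DHHC-1:00:%s:" % base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode())' "$key"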
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=880087c801b564e30bfe8465df943416e5efa96b236ccc71c6b883f75824cb8e 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:13.099 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.wbV 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 880087c801b564e30bfe8465df943416e5efa96b236ccc71c6b883f75824cb8e 3 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 880087c801b564e30bfe8465df943416e5efa96b236ccc71c6b883f75824cb8e 3 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=880087c801b564e30bfe8465df943416e5efa96b236ccc71c6b883f75824cb8e 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.wbV 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.wbV 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wbV 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6a48bec83e0646bf3c5d74d579ed27425ce57e845c903a5e 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.UE0 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6a48bec83e0646bf3c5d74d579ed27425ce57e845c903a5e 0 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6a48bec83e0646bf3c5d74d579ed27425ce57e845c903a5e 0 
00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6a48bec83e0646bf3c5d74d579ed27425ce57e845c903a5e 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.UE0 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.UE0 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.UE0 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0d72d0ac6760d52b1d529feeab71377031ce7b58271d7544 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.6Ja 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0d72d0ac6760d52b1d529feeab71377031ce7b58271d7544 2 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0d72d0ac6760d52b1d529feeab71377031ce7b58271d7544 2 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0d72d0ac6760d52b1d529feeab71377031ce7b58271d7544 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:13.100 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.6Ja 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.6Ja 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6Ja 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.361 12:00:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=bdcb80a42d0decd7f0ed99bf3a5129ee 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.yLi 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key bdcb80a42d0decd7f0ed99bf3a5129ee 1 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 bdcb80a42d0decd7f0ed99bf3a5129ee 1 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=bdcb80a42d0decd7f0ed99bf3a5129ee 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.yLi 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.yLi 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yLi 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:13.361 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=85426f28bf0625ec5b621a74a0789236 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.br1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 85426f28bf0625ec5b621a74a0789236 1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 85426f28bf0625ec5b621a74a0789236 1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=85426f28bf0625ec5b621a74a0789236 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.br1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.br1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.br1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=72999d17846050597475311e2a48423ce01d5bdffdb53f52 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.j4O 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 72999d17846050597475311e2a48423ce01d5bdffdb53f52 2 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 72999d17846050597475311e2a48423ce01d5bdffdb53f52 2 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=72999d17846050597475311e2a48423ce01d5bdffdb53f52 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.j4O 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.j4O 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.j4O 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:13.362 12:00:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f37e4aa6213cfedd2e4809f66d1cd108 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.BCg 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f37e4aa6213cfedd2e4809f66d1cd108 0 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f37e4aa6213cfedd2e4809f66d1cd108 0 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f37e4aa6213cfedd2e4809f66d1cd108 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:13.362 12:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.BCg 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.BCg 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.BCg 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=09d19bef33690bc876b5e91b4ee22bb63ba3a2951b4c6e77c238b452b1474dd9 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Lxi 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 09d19bef33690bc876b5e91b4ee22bb63ba3a2951b4c6e77c238b452b1474dd9 3 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 09d19bef33690bc876b5e91b4ee22bb63ba3a2951b4c6e77c238b452b1474dd9 3 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=09d19bef33690bc876b5e91b4ee22bb63ba3a2951b4c6e77c238b452b1474dd9 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Lxi 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Lxi 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Lxi 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1168616 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1168616 ']' 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.623 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lkK 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wbV ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wbV 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.UE0 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6Ja ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.6Ja 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yLi 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.br1 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.br1 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.j4O 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.BCg ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.BCg 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Lxi 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.887 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.888 12:00:58 
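With the five key/ckey pairs generated, the loop above registers each file with the running nvmf_tgt through the keyring_file_add_key RPC (keyN paired with ckeyN, the controller-side secret, whenever one was generated; ckey4 is deliberately empty). Outside the harness's rpc_cmd wrapper, the same registrations against the default /var/tmp/spdk.sock look like this:

    scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.lkK
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wbV
    scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.UE0
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6Ja
    # ...and so on through key4 (/tmp/spdk.key-sha512.Lxi), which has no ckey.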
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:13.888 12:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:17.188 Waiting for block devices as requested 00:27:17.188 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:17.449 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:17.449 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:17.449 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:17.709 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:17.709 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:17.709 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:17.969 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:17.969 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:18.229 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:18.229 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:18.229 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:18.229 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:18.490 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:18.490 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:18.490 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:18.490 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:19.433 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:27:19.433 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:19.433 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:27:19.433 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:19.433 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:19.434 No valid GPT data, bailing 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:19.434 12:01:03 
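Before building the kernel target, setup.sh reset above rebinds the machine's devices from vfio-pci back to kernel drivers (the drive at 0000:65:00.0 comes back as /dev/nvme0n1), and the loop then verifies the namespace is actually expendable: not zoned, no valid GPT (spdk-gpt.py bails), and no partition-table signature from blkid. A sketch of that eligibility test:

    for blk in /sys/block/nvme*; do
        dev=/dev/${blk##*/}
        [[ $(cat "$blk/queue/zoned" 2>/dev/null) != none ]] && continue    # skip zoned namespaces
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue            # partition table present
        echo "usable namespace: $dev" && break
    done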
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:19.434 00:27:19.434 Discovery Log Number of Records 2, Generation counter 2 00:27:19.434 =====Discovery Log Entry 0====== 00:27:19.434 trtype: tcp 00:27:19.434 adrfam: ipv4 00:27:19.434 subtype: current discovery subsystem 00:27:19.434 treq: not specified, sq flow control disable supported 00:27:19.434 portid: 1 00:27:19.434 trsvcid: 4420 00:27:19.434 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:19.434 traddr: 10.0.0.1 00:27:19.434 eflags: none 00:27:19.434 sectype: none 00:27:19.434 =====Discovery Log Entry 1====== 00:27:19.434 trtype: tcp 00:27:19.434 adrfam: ipv4 00:27:19.434 subtype: nvme subsystem 00:27:19.434 treq: not specified, sq flow control disable supported 00:27:19.434 portid: 1 00:27:19.434 trsvcid: 4420 00:27:19.434 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:19.434 traddr: 10.0.0.1 00:27:19.434 eflags: none 00:27:19.434 sectype: none 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
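The mkdir/echo/ln run above is the standard kernel nvmet configfs recipe, and the discovery log that follows confirms it took: nqn.2024-02.io.spdk:cnode0 is exported on 10.0.0.1:4420 over TCP next to the discovery subsystem. Reassembled with attribute files named; which echo feeds which file is my reconstruction, since xtrace hides redirections:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"    # identity string; exact attr file not shown
    echo 1            > "$sub/attr_allow_any_host"              # the host/auth.sh 'echo 0' above flips this back off before whitelisting host0
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"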
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
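The nvmet_auth_set_key calls running through auth.sh@42-51 above program the target's expectations for host0: the digest, the DH group, the host secret, and (when one exists) the controller secret for bidirectional authentication. These land in per-host configfs attributes; the redirection targets are again hidden by xtrace, so the attribute names below follow the standard nvmet layout and are assumptions, with the secrets elided:

    # Sketch of nvmet_auth_set_key (auth.sh@42-51). Attribute names assumed
    # from the standard nvmet configfs layout for host entries.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'   > "$host/dhchap_hash"      # digest under test
    echo ffdhe2048        > "$host/dhchap_dhgroup"   # DH group under test
    echo "DHHC-1:00:...:" > "$host/dhchap_key"       # target authenticates the host
    # Written only when a controller key exists (bidirectional case):
    echo "DHHC-1:02:...:" > "$host/dhchap_ctrl_key"  # host authenticates the target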
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.434 nvme0n1 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.434 12:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:19.434 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
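connect_authenticate (auth.sh@55-65) is the initiator half: it narrows SPDK's bdev_nvme layer to exactly the digest/DH-group pair under test, attaches with the matching key pair, and passes if the controller enumerates. The same sequence with scripts/rpc.py called directly (rpc_cmd is the suite's wrapper around it; key1/ckey1 are key names the test registered earlier in the run, outside this excerpt):

    # Sketch of one connect_authenticate cycle using rpc.py directly.
    # Assumes an SPDK target app is up and key1/ckey1 were registered earlier.
    rpc=scripts/rpc.py

    "$rpc" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Pass condition from auth.sh@64, then reset for the next combination.
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0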
00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.435 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.696 nvme0n1 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.696 12:01:04 
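The get_main_ns_ip block that repeats before every attach above (nvmf/common.sh@767-781) just resolves which environment variable carries the address for the transport under test and dereferences it. Reconstructed from the trace, rdma maps to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, which here holds 10.0.0.1; the name of the transport variable is an assumption, since the trace only shows its value:

    # get_main_ns_ip, reconstructed from the xtrace at nvmf/common.sh@767-781.
    # $TEST_TRANSPORT is an assumed name; the trace shows only its value, tcp.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip}: indirect expansion of the named variable
        echo "${!ip}"                 # -> 10.0.0.1 in this run
    }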
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
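Every rpc_cmd above is bracketed by xtrace_disable (autotest_common.sh@561) and a closing `set +x` (autotest_common.sh@10), which is why none of the RPC plumbing itself appears in the trace. SPDK's real helpers keep a stack of trace states; a minimal save/restore sketch of the idiom:

    # Minimal sketch of the silence-the-trace idiom seen around each rpc_cmd.
    # SPDK's actual helpers are more elaborate; this shows the principle only.
    xtrace_disable() {
        PREV_XTRACE=$(set +o | grep xtrace)   # "set -o xtrace" or "set +o xtrace"
        set +x
    }
    xtrace_restore() {
        eval "$PREV_XTRACE"                   # put the caller's setting back
    }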
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.696 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.956 nvme0n1 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:19.956 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.957 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.217 nvme0n1 00:27:20.217 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.217 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.217 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.217 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
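Every secret in this trace uses the NVMe DH-HMAC-CHAP representation DHHC-1:XX:<base64>:, where XX ties the secret to a hash (00 = unqualified, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and, to my understanding, the base64 payload is the raw secret with a 4-byte CRC32 appended, so it decodes to 36/52/68 bytes for a 32/48/64-byte secret. A quick check against the keyid-2 secret above:

    # Decode a DHHC-1 blob's payload and check its length (secret || CRC32).
    key='DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3:'
    b64=${key#DHHC-1:??:}   # strip "DHHC-1:" and the two-digit hash id
    b64=${b64%:}            # strip the trailing colon
    printf '%s' "$b64" | base64 -d | wc -c   # -> 36 = 32-byte secret + 4-byte CRC32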
'.[].name' 00:27:20.217 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.217 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.217 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.218 nvme0n1 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.218 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.479 12:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.479 nvme0n1 00:27:20.479 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.479 12:01:05 
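keyid 4 is the one unidirectional case in the sweep: its controller key is empty (ckey= above), so only the target authenticates the host, and the attach at auth.sh@61 carries --dhchap-key key4 with no --dhchap-ctrlr-key. The machinery is the conditional expansion at auth.sh@58, visible verbatim in the trace:

    # auth.sh@58: include the controller-key arguments only when a controller
    # key exists for this keyid. ${var:+word} expands to "word" if var is set
    # and non-empty, and to nothing otherwise.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # keyids 0-3: ckey=(--dhchap-ctrlr-key ckeyN)  -> bidirectional
    # keyid 4:    ckey=()                          -> host-only authentication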
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.480 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.480 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.480 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.480 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.480 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.480 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.480 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.480 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
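With the ffdhe2048 pass complete, the trace rolls over to ffdhe3072 and repeats the whole key sweep; the driving loops at auth.sh@100-103 are a plain cross product over the lists printed at auth.sh@94 earlier in the trace:

    # The sweep implied by auth.sh@100-103 (lists from the printf at @94).
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do            # 0..4, as seen above
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done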
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.740 nvme0n1 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.740 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.741 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.001 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.002 
12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.002 nvme0n1 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.002 12:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.002 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.263 nvme0n1 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.263 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.264 12:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.264 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.264 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.264 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.264 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.524 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.524 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.524 12:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.524 nvme0n1 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.524 12:01:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.524 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.785 nvme0n1 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.785 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.046 nvme0n1 00:27:22.046 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.046 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.046 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.046 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.046 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.046 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:22.306 12:01:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:22.306 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.307 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.567 nvme0n1 00:27:22.567 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:22.567 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.567 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.567 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.567 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.567 12:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
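The nvmet_auth_set_key steps traced above (host/auth.sh@42-@51) program the target side of DH-HMAC-CHAP before each connection attempt: the echoed 'hmac(sha256)' selects the digest, the echoed FFDHE group selects the DH group, and the DHHC-1 secrets are the per-keyid host key and optional controller (bidirectional) key. A minimal sketch of that helper, assuming the standard Linux nvmet configfs attributes and the nqn.2024-02.io.spdk:host0 entry this run connects with; the real helper is defined in SPDK's test/nvmf/host/auth.sh:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    # Assumed configfs host entry, created earlier during test setup.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"    # e.g. hmac(sha256)
    echo "${dhgroup}" > "${host}/dhchap_dhgroup"      # e.g. ffdhe4096
    echo "${key}" > "${host}/dhchap_key"              # DHHC-1:xx:...: host key
    # A controller key is only set for keyids that have one (keyid 4 does not).
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
}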
00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.567 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.828 nvme0n1 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:22.828 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.829 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.089 nvme0n1 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.089 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.090 12:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.090 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.350 nvme0n1 00:27:23.350 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.350 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.350 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.350 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.350 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.350 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.612 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.612 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.612 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.612 12:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.612 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.612 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.612 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.612 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:23.612 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.612 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.612 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.612 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.613 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.873 nvme0n1 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.873 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 
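Each connect_authenticate pass (host/auth.sh@55-@65) then exercises the SPDK host side: get_main_ns_ip resolves the target address, picking NVMF_INITIATOR_IP for tcp and NVMF_FIRST_TARGET_IP for rdma (10.0.0.1 here); bdev_nvme_set_options pins the initiator to the digest/dhgroup pair under test; and the attach either authenticates or fails. A standalone sketch of the same cycle follows; the scripts/rpc.py entry point is an assumption (the test issues these through rpc_cmd over the daemon's RPC socket), while the RPC names and flags are verbatim from the trace:

rpc=scripts/rpc.py   # assumed path to the SPDK RPC client

# Restrict the initiator to the one digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# Attach with this keyid's host key and, when bidirectional, controller key.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Authentication passed iff the controller is now visible by name...
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# ...then detach so the next digest/dhgroup/keyid combination starts clean.
$rpc bdev_nvme_detach_controller nvme0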
00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.874 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.134 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.134 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.134 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.134 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.134 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.135 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.395 nvme0n1 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.395 12:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.395 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.396 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.396 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.396 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.396 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.396 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.396 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.396 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.396 12:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.966 nvme0n1 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.966 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.967 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.227 nvme0n1 00:27:25.227 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.227 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.227 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.227 12:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.227 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.487 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.488 12:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.488 12:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.749 nvme0n1 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.749 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.009 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.010 12:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.581 nvme0n1 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:26.581 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.582 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.155 nvme0n1 00:27:27.155 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.155 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.155 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.155 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.155 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.155 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:27.416 
12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.416 12:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.989 nvme0n1 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.989 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.990 
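Before every attach, the same nvmf/common.sh@767-781 helper picks which address to dial: an associative array maps each transport to the name of the environment variable holding its address, and for tcp that resolves via indirect expansion to NVMF_INITIATOR_IP, i.e. 10.0.0.1 throughout this run. A sketch of that logic as it can be read back out of the trace (the guards are inferred from the @773/@776 tests; treat TEST_TRANSPORT as an assumption, since the trace only shows the literal tcp):

    get_main_ns_ip() {   # reconstructed from the nvmf/common.sh@767-781 trace lines
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the target side
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator side
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # name of the variable to read
        [[ -z ${!ip} ]] && return 1                  # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }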
12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.990 12:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.561 nvme0n1 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.561 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.822 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.394 nvme0n1 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.394 12:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.655 nvme0n1 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:29.655 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.656 nvme0n1 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.656 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:29.918 12:01:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.918 nvme0n1 00:27:29.918 12:01:14 
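Every secret in this section is in the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where, following nvme-cli's gen-dhchap-key convention, <t> names the optional secret transform (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. That makes the keys easy to sanity-check straight from the log; for example, with the keyid-0 secret used above:

    key='DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o:'
    payload=${key#DHHC-1:*:}                 # drop the "DHHC-1:<t>:" prefix
    payload=${payload%:}                     # drop the trailing colon
    echo -n "$payload" | base64 -d | wc -c   # 36 = 32-byte secret + 4-byte CRC-32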
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.918 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.179 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.180 nvme0n1 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.180 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 nvme0n1 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.441 12:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.441 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 nvme0n1 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.702 
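The remainder of the section is the same pattern on repeat: the host/auth.sh@100-103 loops walk every digest, every DH group, and every key id, re-keying the kernel target before each connect, which is why the trace has just moved from ffdhe2048 to ffdhe3072 under sha384. A sketch of that outer loop, with nvmet_auth_set_key reconstructed from the echoes at host/auth.sh@48-51; the redirection targets are not visible in this wrapped trace, so the configfs paths below are an assumption based on the kernel nvmet host attributes, not something the log confirms:

    nvmet_auth_set_key() {   # sketch; echo destinations are assumed, only the values are traced
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"         # @48
        echo "$dhgroup" > "$host/dhchap_dhgroup"           # @49
        echo "${keys[keyid]}" > "$host/dhchap_key"         # @50
        # @51: a controller key is written only when one exists for this key id.
        [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }

    for digest in "${digests[@]}"; do             # @100: sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do       # @101: ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do        # @102: key ids 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done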
12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.702 12:01:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.702 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.963 nvme0n1 00:27:30.963 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.964 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.225 nvme0n1 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.225 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.226 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.226 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.493 nvme0n1 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.494 
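
nvmet_auth_set_key (host/auth.sh@42-51) only shows its echoes in the xtrace; the redirect targets are cropped from the trace. The values are presumably written into the kernel nvmet configfs entry for the host NQN, so the following is a minimal sketch assuming the standard Linux nvmet attribute names, not something visible in this log:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"       # host/auth.sh@48
  echo ffdhe3072      > "$host/dhchap_dhgroup"    # host/auth.sh@49
  echo "$key"         > "$host/dhchap_key"        # host/auth.sh@50
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # host/auth.sh@51
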
12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.494 12:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.494 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.495 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.495 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.757 nvme0n1 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.757 
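
Note how keyid=4 differs from the earlier passes: ckeys[4] is empty (the @51 test above reads [[ -z '' ]]), so the @58 assignment expands to an empty array and the attach just traced carries --dhchap-key key4 with no controller key at all, rather than an empty flag value. That is plain :+ parameter expansion; a self-contained illustration:

  # ${var:+words}: expands to the words only when var is set and non-empty
  ckeys=([0]=somekey [4]="")                     # keyid 4 has no controller key
  keyid=4
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"                             # -> 0: the flag pair vanishes
  keyid=0
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"                              # -> --dhchap-ctrlr-key ckey0
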
12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.757 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.018 nvme0n1 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.018 12:01:16 
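
rpc_cmd here is the autotest wrapper around SPDK's scripts/rpc.py; issued by hand, the two host-side calls of this pass would look like the sketch below (default RPC socket assumed, flags copied verbatim from the trace):

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
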
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.018 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.279 nvme0n1 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.279 12:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.540 nvme0n1 00:27:32.540 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.540 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.540 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.540 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.540 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.540 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.801 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.802 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.063 nvme0n1 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.063 12:01:17 
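
The get_main_ns_ip trace (nvmf/common.sh@767-781) repeats unchanged on every pass; it resolves which environment variable names the address for the transport in use, then dereferences it. A reconstruction from those line tags (the transport variable's name is my guess; the trace only shows its value, tcp):

  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP                       # common.sh@770
    ip_candidates["tcp"]=NVMF_INITIATOR_IP                           # common.sh@771
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @773
    ip=${ip_candidates[$TEST_TRANSPORT]}    # @774: ip now holds a variable *name*
    [[ -z ${!ip} ]] && return 1             # @776: dereference -> 10.0.0.1 here
    echo "${!ip}"                           # @781
  }
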
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.063 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.325 nvme0n1 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.325 12:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.895 nvme0n1 00:27:33.895 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.895 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.895 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.895 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.896 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.157 nvme0n1 00:27:34.157 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.157 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.157 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.157 12:01:18 
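
Each pass ends the same way (host/auth.sh@64-65): list the controllers, require exactly the bdev that was attached, and tear it down. The right-hand side prints as \n\v\m\e\0 because xtrace escapes a quoted == operand inside [[ ]], marking it a literal match rather than a glob pattern. In sketch form:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')   # @64
  [[ $name == "nvme0" ]]                                         # @64: literal compare
  rpc_cmd bdev_nvme_detach_controller nvme0                      # @65
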
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.157 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.157 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.417 12:01:18 
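
The bracket around every rpc_cmd call, common/autotest_common.sh@561 (xtrace_disable) followed by @589 printing [[ 0 == 0 ]], is the harness's return-code assertion with the rc already substituted into the trace. A hedged sketch of what that pairing implies, not the wrapper's actual text (xtrace_restore is my assumed counterpart to the @561 call):

  run_rpc() {
    "$@"                     # the wrapped rpc.py invocation
    local rc=$?
    xtrace_disable           # autotest_common.sh@561: hide the wrapper's plumbing
    xtrace_restore
    [[ $rc == 0 ]]           # autotest_common.sh@589: shows as [[ 0 == 0 ]] on success
  }
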
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.417 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.418 12:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.678 nvme0n1 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.678 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:34.938 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:34.938 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:34.938 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.939 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.939 
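[editor's note] Each nvmet_auth_set_key call (host/auth.sh@42-51 above) stages the digest, DH group, host key, and, when one exists, the controller key on the kernel nvmet target before the host tries to connect. The echo lines at @48-51 plausibly land on the nvmet configfs host entry; the path and attribute names in this sketch are assumptions from the standard kernel nvmet configfs layout, while the echoed values match the xtrace:

    # Hypothetical reconstruction of nvmet_auth_set_key; the configfs path and
    # attribute names are assumed, not taken from the test source.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"     # @48
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # @49
        echo "${key}"          > "${host}/dhchap_key"      # @50
        # Controller key only for bidirectional auth (@51); keyid 4 has none.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }
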
12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.199 nvme0n1 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.199 12:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.770 nvme0n1 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.770 12:01:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.770 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.341 nvme0n1 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.341 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.342 12:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.282 nvme0n1 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.282 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.283 
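[editor's note] get_main_ns_ip (nvmf/common.sh@767-781, traced before every attach above) resolves the address passed to -a. The xtrace shows the pattern clearly: an associative array maps the transport to the *name* of the environment variable holding the initiator IP, and an indirect expansion dereferences it. Reconstructed loosely below; the two guards at @773 are an interpretation rather than a verbatim copy:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()                        # @767-768
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP       # @770
        ip_candidates["tcp"]=NVMF_INITIATOR_IP           # @771

        [[ -z ${TEST_TRANSPORT} ]] && return 1                  # @773: tcp in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @773
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @774: the name NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # @776: indirect expansion -> 10.0.0.1
        echo "${!ip}"                          # @781
    }
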
12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.283 12:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.855 nvme0n1 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.855 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.427 nvme0n1 00:27:38.427 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.427 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.427 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.427 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.427 12:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.427 12:01:22 
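[editor's note] All the secrets in this sweep use the DHHC-1 presentation format, DHHC-1:<t>:<base64>:, defined for NVMe DH-HMAC-CHAP (TP 8006). The <t> field correlates with the secret lengths visible in this very log: 00 carries a free-form secret, while 01/02/03 carry 32-, 48- and 64-byte secrets matching the SHA-256/384/512 output sizes. To the best of my knowledge the base64 payload is the secret followed by a 4-byte CRC-32; that trailer is the one assumption in the quick field check below (the key literal is keyid 0 from this run):

    key='DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o:'
    t=$(cut -d: -f2 <<< "$key")                        # transform id, 00 here
    n=$(cut -d: -f3 <<< "$key" | base64 -d | wc -c)    # decoded payload length
    # Assuming a trailing 4-byte CRC-32 on the payload:
    echo "transform ${t}, secret $((n - 4)) bytes"     # -> transform 00, secret 32 bytes
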
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:38.427 12:01:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.427 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.370 nvme0n1 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:39.370 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:39.371 nvme0n1 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.371 12:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 nvme0n1 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:39.633 
12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.633 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.893 nvme0n1 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:39.893 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.894 
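[editor's note] Host-side, every iteration is the same four-RPC sequence: pin the negotiable digest/DH group to one pair, attach with the per-keyid secrets, verify the controller materialized, detach. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so the equivalent direct invocation looks roughly like this sketch; key0/ckey0 are assumed to be key names registered with SPDK earlier in the test, and the rpc path is relative to an SPDK checkout:

    rpc=scripts/rpc.py

    # Offer exactly one digest and one DH group, so a successful handshake
    # proves that specific combination works (host/auth.sh@60).
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # DH-HMAC-CHAP attach (host/auth.sh@61): --dhchap-key authenticates the
    # host; --dhchap-ctrlr-key, when given, makes the auth bidirectional.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # @64: expect nvme0
    $rpc bdev_nvme_detach_controller nvme0              # @65: clean up for next pass
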
12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.894 nvme0n1 00:27:39.894 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.155 nvme0n1 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.155 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.416 nvme0n1 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.416 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.417 12:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.417 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.417 
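Annotation: stripped of the xtrace plumbing, each connect_authenticate pass above is two RPCs against the initiator-side SPDK app: first pin the host to exactly one digest/DH-group combination, then attach with the key names for that slot. A sketch of the ffdhe3072/keyid 0 round just traced, assuming the standard SPDK checkout layout for rpc.py:

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

Constraining the digest and DH-group lists to a single value per pass is what lets the loop prove every combination negotiates, rather than letting the two sides fall back to a mutually preferred one.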
12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.417 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.417 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.417 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:40.677 12:01:25 
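Annotation: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line traced at auth.sh@58 is conditional argument construction: with bash's ":+" expansion the array stays empty whenever no controller key is configured, so keyid 4 (whose ckey is empty in this run) quietly becomes a one-way authentication test. A standalone illustration of the idiom, with a placeholder value instead of a real DHHC-1 string:

    ckeys=( [0]=dummy-secret [4]='' )    # shape as in the run; values shortened
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<one-way auth>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=4 -> <one-way auth>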
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.677 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.678 nvme0n1 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.678 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:40.938 12:01:25 
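Annotation: get_main_ns_ip, whose trace recurs before every attach, resolves the connect address by table lookup plus variable indirection: the transport selects an environment-variable name, and ${!ip} dereferences it to the address. A condensed sketch; variable names are abridged and the real helper in nvmf/common.sh carries more fallbacks:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        ip=${!ip}                              # indirect expansion -> 10.0.0.1
        [[ -n $ip ]] && echo "$ip"
    }
    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1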
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.938 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.938 nvme0n1 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.939 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.199 12:01:25 
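Annotation: on the other side of each pass, nvmet_auth_set_key (auth.sh@42-51) feeds the same digest, DH group, and secrets to the kernel nvmet target; the bare echo lines in the trace are writes whose redirections xtrace does not print. A sketch of what that plausibly amounts to; the configfs attribute paths are my assumption from the Linux nvmet host layout, not read from this log, and the secrets are shortened:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'       > "$host/dhchap_hash"
    echo ffdhe3072            > "$host/dhchap_dhgroup"
    echo "DHHC-1:02:NzI5...:" > "$host/dhchap_key"       # host secret (shortened)
    echo "DHHC-1:00:ZjM3...:" > "$host/dhchap_ctrl_key"  # controller secret (shortened)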
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.199 nvme0n1 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.199 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.200 
12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.200 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.460 12:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
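Annotation: each attach is then verified the same way: authentication only counts as passed if a controller actually materialized under the requested name, after which it is detached so the next combination starts clean. The odd-looking \n\v\m\e\0 in the traces is just xtrace escaping a quoted right-hand side so [[ == ]] matches literally rather than as a glob. The check, minus the rc bookkeeping:

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]    # literal match, as in auth.sh@64
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0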
00:27:41.460 nvme0n1 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:41.460 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.461 12:01:26 
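Annotation: the cadence of this whole section is the two nested loops traced at auth.sh@101-102: every DH group is exercised against every key slot (0-4) under the sha512 digest before the next group starts, which is why the ffdhe2048 rounds above are followed by identical ffdhe3072 and ffdhe4096 ones. Roughly the following; the tail of the dhgroups list past ffdhe6144 is an assumption, since only groups up to ffdhe6144 appear in this excerpt:

    digest=sha512
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                          # slots 0..4 here
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side + verify
        done
    done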
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.461 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.721 nvme0n1 00:27:41.721 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.721 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.721 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.721 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.721 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.721 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.982 12:01:26 
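Annotation: the autotest_common.sh@561/@10 pairs that bracket every rpc_cmd above are SPDK's xtrace guard: tracing is switched off while the RPC runs so large JSON replies do not flood the log, then restored. The real helpers track nesting depth; a minimal rendering of the idea:

    xtrace_disable() { PREV_XTRACE=$-; set +x; }               # remember flags first
    xtrace_restore() { [[ $PREV_XTRACE == *x* ]] && set -x; }  # re-enable if it was on

    xtrace_disable
    ./scripts/rpc.py bdev_nvme_get_controllers >/dev/null
    xtrace_restore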
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.982 12:01:26 
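Annotation: likewise, every "[[ 0 == 0 ]]" at autotest_common.sh@589 is the assertion half of rpc_cmd: the wrapper checks the RPC's exit status (xtrace prints the expanded value, hence the literal 0), so an authentication reject anywhere in these loops would fail the test rather than scroll past. The pattern reduced to its core; the real wrapper is richer:

    rpc_cmd() { ./scripts/rpc.py "$@"; }   # stand-in for the real JSON-RPC wrapper
    rpc_cmd bdev_nvme_get_controllers
    [[ $? == 0 ]] || exit 1                # traces as "[[ 0 == 0 ]]" on success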
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.982 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.243 nvme0n1 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.243 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.244 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.244 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.244 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.504 nvme0n1 00:27:42.504 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.504 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.504 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.504 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.504 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.504 12:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.504 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.765 nvme0n1 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.765 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.766 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.027 nvme0n1 00:27:43.027 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.027 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.027 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.027 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.027 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.027 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.287 12:01:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.287 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.288 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.288 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.288 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.288 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.288 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.288 12:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.548 nvme0n1 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.548 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:43.808 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.809 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:43.809 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:43.809 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:43.809 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.809 12:01:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.809 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.069 nvme0n1 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.069 12:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.641 nvme0n1 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.641 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.902 nvme0n1 00:27:44.902 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.162 12:01:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.162 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.423 nvme0n1 00:27:45.423 12:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.423 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGI3Mzc2NTM1YWJlYTViZWEwNjZjZGEwMDEzNGQ4MTkCVt6o: 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: ]] 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODgwMDg3YzgwMWI1NjRlMzBiZmU4NDY1ZGY5NDM0MTZlNWVmYTk2YjIzNmNjYzcxYzZiODgzZjc1ODI0Y2I4ZeJCkEo=: 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.683 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.254 nvme0n1 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.254 12:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.825 nvme0n1 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.825 12:01:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.825 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.085 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.086 12:01:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.086 12:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.657 nvme0n1 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzI5OTlkMTc4NDYwNTA1OTc0NzUzMTFlMmE0ODQyM2NlMDFkNWJkZmZkYjUzZjUygzaf8Q==: 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjM3ZTRhYTYyMTNjZmVkZDJlNDgwOWY2NmQxY2QxMDgzIRaW: 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.657 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.657 
12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.229 nvme0n1 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.229 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDlkMTliZWYzMzY5MGJjODc2YjVlOTFiNGVlMjJiYjYzYmEzYTI5NTFiNGM2ZTc3YzIzOGI0NTJiMTQ3NGRkOT5ooHw=: 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.490 12:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 nvme0n1 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.061 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.062 request: 00:27:49.062 { 00:27:49.062 "name": "nvme0", 00:27:49.062 "trtype": "tcp", 00:27:49.062 "traddr": "10.0.0.1", 00:27:49.062 "adrfam": "ipv4", 00:27:49.062 "trsvcid": "4420", 00:27:49.062 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:49.062 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:49.062 "prchk_reftag": false, 00:27:49.062 "prchk_guard": false, 00:27:49.062 "hdgst": false, 00:27:49.062 "ddgst": false, 00:27:49.062 "allow_unrecognized_csi": false, 00:27:49.062 "method": "bdev_nvme_attach_controller", 00:27:49.062 "req_id": 1 00:27:49.062 } 00:27:49.062 Got JSON-RPC error response 00:27:49.062 response: 00:27:49.062 { 00:27:49.062 "code": -5, 00:27:49.062 "message": "Input/output error" 00:27:49.062 } 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
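Note on the failing attach just traced: with no DH-CHAP key offered, the target rejects the connection and the RPC surfaces -5 (Input/output error), which is the expected outcome here. The NOT wrapper whose bookkeeping appears in the trace (local es=0, then the (( es > 128 )) and (( !es == 0 )) checks) simply inverts the exit status so a negative test passes exactly when the wrapped command fails. A minimal sketch of that pattern, assuming only what the trace shows:

  # expected-failure guard in the style of autotest_common.sh's NOT;
  # the real helper also validates its argument via type -t, as the
  # valid_exec_arg steps above show
  NOT() {
      local es=0
      "$@" || es=$?
      if ((es > 128)); then
          return "$es" # killed by a signal: a crash, not an expected failure
      fi
      ((es != 0)) # succeed only when the wrapped command failed
  }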
00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.062 request: 00:27:49.062 { 00:27:49.062 "name": "nvme0", 00:27:49.062 "trtype": "tcp", 00:27:49.062 "traddr": "10.0.0.1", 00:27:49.062 "adrfam": "ipv4", 00:27:49.062 "trsvcid": "4420", 00:27:49.062 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:49.062 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:49.062 "prchk_reftag": false, 00:27:49.062 "prchk_guard": false, 00:27:49.062 "hdgst": false, 00:27:49.062 "ddgst": false, 00:27:49.062 "dhchap_key": "key2", 00:27:49.062 "allow_unrecognized_csi": false, 00:27:49.062 "method": "bdev_nvme_attach_controller", 00:27:49.062 "req_id": 1 00:27:49.062 } 00:27:49.062 Got JSON-RPC error response 00:27:49.062 response: 00:27:49.062 { 00:27:49.062 "code": -5, 00:27:49.062 "message": "Input/output error" 00:27:49.062 } 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.062 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
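rpc_cmd in these traces is a thin wrapper that forwards its arguments to SPDK's scripts/rpc.py over the application's RPC socket; the JSON request bodies logged above are exactly what gets posted. The second negative case, written as a direct invocation (the address, port, and NQNs are this run's values, and the kernel target was provisioned with keyid 1, so offering key2 has to fail):

  # expected to fail with -5: the target holds key1 for this host
  scripts/rpc.py bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2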
00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.323 request: 00:27:49.323 { 00:27:49.323 "name": "nvme0", 00:27:49.323 "trtype": "tcp", 00:27:49.323 "traddr": "10.0.0.1", 00:27:49.323 "adrfam": "ipv4", 00:27:49.323 "trsvcid": "4420", 00:27:49.323 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:49.323 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:49.323 "prchk_reftag": false, 00:27:49.323 "prchk_guard": false, 00:27:49.323 "hdgst": false, 00:27:49.323 "ddgst": false, 00:27:49.323 "dhchap_key": "key1", 00:27:49.323 "dhchap_ctrlr_key": "ckey2", 00:27:49.323 "allow_unrecognized_csi": false, 00:27:49.323 "method": "bdev_nvme_attach_controller", 00:27:49.323 "req_id": 1 00:27:49.323 } 00:27:49.323 Got JSON-RPC error response 00:27:49.323 response: 00:27:49.323 { 00:27:49.323 "code": -5, 00:27:49.323 "message": "Input/output 
error" 00:27:49.323 } 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.323 nvme0n1 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.323 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.324 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:49.324 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:49.324 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:49.324 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.324 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.324 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.584 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.584 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.584 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:49.584 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.584 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.584 12:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.584 request: 00:27:49.584 { 00:27:49.584 "name": "nvme0", 00:27:49.584 "dhchap_key": "key1", 00:27:49.584 "dhchap_ctrlr_key": "ckey2", 00:27:49.584 "method": "bdev_nvme_set_keys", 00:27:49.584 "req_id": 1 00:27:49.584 } 00:27:49.584 Got JSON-RPC error response 00:27:49.584 response: 00:27:49.584 { 00:27:49.584 "code": -13, 00:27:49.584 "message": "Permission denied" 00:27:49.584 } 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:49.584 12:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:50.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:50.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.525 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE0OGJlYzgzZTA2NDZiZjNjNWQ3NGQ1NzllZDI3NDI1Y2U1N2U4NDVjOTAzYTVltRNb7A==: 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: ]] 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ3MmQwYWM2NzYwZDUyYjFkNTI5ZmVlYWI3MTM3NzAzMWNlN2I1ODI3MWQ3NTQ07iUqmQ==: 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.785 
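The nvmet_auth_set_key calls in this trace (the echo 'hmac(sha256)', echo ffdhe2048, and echo DHHC-1:... triples) provision the kernel target's half of DH-HMAC-CHAP through configfs. A sketch of where those echoes land, assuming the standard nvmet per-host attributes; the host directory and its allowed_hosts link were created during test setup, and the key values are elided here because the full strings appear in the trace:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # HMAC digest to negotiate
  echo ffdhe2048 > "$host/dhchap_dhgroup"        # FFDHE group
  echo "DHHC-1:00:..." > "$host/dhchap_key"      # host key (keyid 1 in this pass)
  echo "DHHC-1:02:..." > "$host/dhchap_ctrl_key" # controller key, enables bidirectional auth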
12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.785 nvme0n1 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmRjYjgwYTQyZDBkZWNkN2YwZWQ5OWJmM2E1MTI5ZWVbv5s3: 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: ]] 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODU0MjZmMjhiZjA2MjVlYzViNjIxYTc0YTA3ODkyMzYOf/lu: 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.785 12:01:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.785 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.786 request: 00:27:50.786 { 00:27:50.786 "name": "nvme0", 00:27:50.786 "dhchap_key": "key2", 00:27:50.786 "dhchap_ctrlr_key": "ckey1", 00:27:50.786 "method": "bdev_nvme_set_keys", 00:27:50.786 "req_id": 1 00:27:50.786 } 00:27:50.786 Got JSON-RPC error response 00:27:50.786 response: 00:27:50.786 { 00:27:50.786 "code": -13, 00:27:50.786 "message": "Permission denied" 00:27:50.786 } 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.786 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.046 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:51.046 12:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:51.987 rmmod nvme_tcp 00:27:51.987 rmmod nvme_fabrics 00:27:51.987 
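The bdev_nvme_set_keys calls traced above re-key a live controller and force DH-CHAP re-authentication: a pair the target agrees with succeeds, while a mismatched controller key is rejected with -13 (Permission denied). And because the controller was attached with --ctrlr-loss-timeout-sec 1, a controller left holding stale keys fails its reconnect and is reaped, which is what the jq length polling loop waits for. The happy-path call and the probe, as direct invocations:

  # rotate both keys on a live controller; the target must agree on the pair
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # poll until a controller left with stale keys has been dropped
  scripts/rpc.py bdev_nvme_get_controllers | jq length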
12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1168616 ']' 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1168616 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1168616 ']' 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1168616 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1168616 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1168616' 00:27:51.987 killing process with pid 1168616 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1168616 00:27:51.987 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1168616 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.248 12:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.161 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.161 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:54.161 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:54.161 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:54.161 12:01:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:54.161 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:54.421 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:54.421 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:54.422 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:54.422 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:54.422 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:54.422 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:54.422 12:01:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:57.722 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:57.722 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:57.983 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:57.983 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:57.983 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:57.983 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:57.983 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:57.983 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:57.983 12:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lkK /tmp/spdk.key-null.UE0 /tmp/spdk.key-sha256.yLi /tmp/spdk.key-sha384.j4O /tmp/spdk.key-sha512.Lxi /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:57.983 12:01:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:02.189 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:02.189 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:28:02.189 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:02.189 00:28:02.189 real 0m57.239s 00:28:02.189 user 0m51.362s 00:28:02.189 sys 0m15.385s 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.189 ************************************ 00:28:02.189 END TEST nvmf_auth_host 00:28:02.189 ************************************ 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.189 ************************************ 00:28:02.189 START TEST nvmf_digest 00:28:02.189 ************************************ 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:02.189 * Looking for test storage... 00:28:02.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:02.189 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 
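For reference, the clean_kernel_target sequence at the end of the auth test above has to dismantle the configfs tree child-before-parent, otherwise the rmdir calls fail with EBUSY, and the nvmet modules can only unload once the tree is empty. Condensed from the trace (the echo 0 presumably disables the namespace before anything is removed):

  cfs=/sys/kernel/config/nvmet
  subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
  echo 0 > "$subsys/namespaces/1/enable"                     # take the namespace offline first (assumed target of the echo 0)
  rm -f "$cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0" # unlink the port-to-subsystem symlink
  rmdir "$subsys/namespaces/1"
  rmdir "$cfs/ports/1"
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                                # only now are the modules free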
00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:02.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.190 --rc genhtml_branch_coverage=1 00:28:02.190 --rc genhtml_function_coverage=1 00:28:02.190 --rc genhtml_legend=1 00:28:02.190 --rc geninfo_all_blocks=1 00:28:02.190 --rc geninfo_unexecuted_blocks=1 00:28:02.190 00:28:02.190 ' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:02.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.190 --rc genhtml_branch_coverage=1 00:28:02.190 --rc genhtml_function_coverage=1 00:28:02.190 --rc genhtml_legend=1 00:28:02.190 --rc geninfo_all_blocks=1 00:28:02.190 --rc geninfo_unexecuted_blocks=1 00:28:02.190 00:28:02.190 ' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:02.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.190 --rc genhtml_branch_coverage=1 00:28:02.190 --rc genhtml_function_coverage=1 00:28:02.190 --rc genhtml_legend=1 00:28:02.190 --rc geninfo_all_blocks=1 00:28:02.190 --rc geninfo_unexecuted_blocks=1 00:28:02.190 00:28:02.190 ' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:02.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:02.190 --rc genhtml_branch_coverage=1 00:28:02.190 --rc genhtml_function_coverage=1 00:28:02.190 --rc genhtml_legend=1 00:28:02.190 --rc geninfo_all_blocks=1 00:28:02.190 --rc geninfo_unexecuted_blocks=1 00:28:02.190 00:28:02.190 ' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
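The cmp_versions walk just traced (ver1 and ver2 split on '.-:', then a field-by-field numeric compare) is how the harness decides that lcov 1.15 predates 2.x and therefore needs the old --rc lcov_* option spellings. The algorithm in brief, as a self-contained sketch (function and variable names here are illustrative, not the script's own):

  # dotted-version less-than: compare numeric fields left to right,
  # treating a missing field as 0, so 1.15 < 2 and 1.15 < 1.15.1
  ver_lt() {
      local -a a b
      local i
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1 # equal
  }
  ver_lt 1.15 2 && echo 'old lcov: use --rc lcov_branch_coverage=1'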
00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:02.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:02.190 12:01:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
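gather_supported_nvmf_pci_devs, entered above, buckets PCI functions by vendor and device ID (0x8086 Intel, 0x15b3 Mellanox) into the e810, x722, and mlx arrays being filled in, then keeps only devices whose netdev is usable. A sketch of the classification step against sysfs (the real script reads a prebuilt pci_bus_cache map of the same data, and its Mellanox ID list is longer than shown here):

  # classify NICs by vendor:device; 0x8086:0x159b is the E810 port pair
  # reported below as 0000:4b:00.0 and 0000:4b:00.1
  e810=() x722=() mlx=()
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      case "$vendor:$device" in
          0x8086:0x1592 | 0x8086:0x159b) e810+=("${dev##*/}") ;;
          0x8086:0x37d2) x722+=("${dev##*/}") ;;
          0x15b3:*) mlx+=("${dev##*/}") ;;
      esac
  done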
00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.326 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:10.327 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:10.327 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp 
== tcp ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:10.327 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:10.327 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.327 12:01:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:28:10.327 00:28:10.327 --- 10.0.0.2 ping statistics --- 00:28:10.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.327 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:28:10.327 00:28:10.327 --- 10.0.0.1 ping statistics --- 00:28:10.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.327 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.327 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:10.328 ************************************ 00:28:10.328 START TEST nvmf_digest_clean 00:28:10.328 ************************************ 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1184882 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1184882 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1184882 ']' 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.328 12:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.328 [2024-10-11 12:01:53.987580] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:10.328 [2024-10-11 12:01:53.987636] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.328 [2024-10-11 12:01:54.076360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.328 [2024-10-11 12:01:54.127214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.328 [2024-10-11 12:01:54.127269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.328 [2024-10-11 12:01:54.127278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.328 [2024-10-11 12:01:54.127285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.328 [2024-10-11 12:01:54.127292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
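The netns plumbing traced above is the whole of the test topology: nvmf_tcp_init leaves the second ice port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, moves the first port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, opens TCP port 4420 through iptables, and pings in both directions before declaring the fabric usable. Condensed into a standalone sketch (commands lifted from this trace; the cvl_* names are the ice netdevs of this particular host, so substitute your own):

  # Sketch of the namespace split performed by nvmf_tcp_init above.
  ip netns add cvl_0_0_ns_spdk                        # target lives here
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port out of root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator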
00:28:10.328 [2024-10-11 12:01:54.128096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.328 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.328 null0 00:28:10.328 [2024-10-11 12:01:54.954415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.590 [2024-10-11 12:01:54.978755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1185058 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1185058 /var/tmp/bperf.sock 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1185058 ']' 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.590 12:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.590 [2024-10-11 12:01:55.038696] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:10.590 [2024-10-11 12:01:55.038767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185058 ] 00:28:10.590 [2024-10-11 12:01:55.120844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.590 [2024-10-11 12:01:55.173379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.531 12:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:11.531 12:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:11.531 12:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:11.531 12:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:11.531 12:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:11.531 12:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.532 12:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.792 nvme0n1 00:28:12.053 12:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:12.053 12:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.053 Running I/O for 2 seconds... 
00:28:13.937 18809.00 IOPS, 73.47 MiB/s [2024-10-11T10:01:58.569Z] 19875.50 IOPS, 77.64 MiB/s 00:28:13.937 Latency(us) 00:28:13.937 [2024-10-11T10:01:58.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.937 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:13.937 nvme0n1 : 2.01 19880.01 77.66 0.00 0.00 6430.12 3044.69 23920.64 00:28:13.937 [2024-10-11T10:01:58.569Z] =================================================================================================================== 00:28:13.937 [2024-10-11T10:01:58.569Z] Total : 19880.01 77.66 0.00 0.00 6430.12 3044.69 23920.64 00:28:13.937 { 00:28:13.937 "results": [ 00:28:13.937 { 00:28:13.937 "job": "nvme0n1", 00:28:13.937 "core_mask": "0x2", 00:28:13.937 "workload": "randread", 00:28:13.937 "status": "finished", 00:28:13.937 "queue_depth": 128, 00:28:13.937 "io_size": 4096, 00:28:13.937 "runtime": 2.007393, 00:28:13.937 "iops": 19880.013529986405, 00:28:13.937 "mibps": 77.6563028515094, 00:28:13.937 "io_failed": 0, 00:28:13.937 "io_timeout": 0, 00:28:13.937 "avg_latency_us": 6430.115971634049, 00:28:13.937 "min_latency_us": 3044.693333333333, 00:28:13.937 "max_latency_us": 23920.64 00:28:13.937 } 00:28:13.937 ], 00:28:13.937 "core_count": 1 00:28:13.937 } 00:28:13.937 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:14.197 | select(.opcode=="crc32c") 00:28:14.197 | "\(.module_name) \(.executed)"' 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1185058 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1185058 ']' 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1185058 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1185058 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = 
sudo ']' 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1185058' 00:28:14.197 killing process with pid 1185058 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1185058 00:28:14.197 Received shutdown signal, test time was about 2.000000 seconds 00:28:14.197 00:28:14.197 Latency(us) 00:28:14.197 [2024-10-11T10:01:58.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.197 [2024-10-11T10:01:58.829Z] =================================================================================================================== 00:28:14.197 [2024-10-11T10:01:58.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:14.197 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1185058 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1185893 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1185893 /var/tmp/bperf.sock 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1185893 ']' 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:14.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:14.458 12:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.458 [2024-10-11 12:01:58.974131] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:28:14.458 [2024-10-11 12:01:58.974189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185893 ] 00:28:14.458 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:14.458 Zero copy mechanism will not be used. 00:28:14.458 [2024-10-11 12:01:59.050626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.458 [2024-10-11 12:01:59.085942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.399 12:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.399 12:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:15.399 12:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:15.399 12:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:15.399 12:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:15.399 12:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.399 12:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.971 nvme0n1 00:28:15.971 12:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:15.971 12:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:15.971 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:15.971 Zero copy mechanism will not be used. 00:28:15.971 Running I/O for 2 seconds... 
00:28:17.939 3137.00 IOPS, 392.12 MiB/s [2024-10-11T10:02:02.571Z] 4017.00 IOPS, 502.12 MiB/s 00:28:17.939 Latency(us) 00:28:17.939 [2024-10-11T10:02:02.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.939 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:17.939 nvme0n1 : 2.01 4014.60 501.82 0.00 0.00 3982.54 563.20 11304.96 00:28:17.939 [2024-10-11T10:02:02.571Z] =================================================================================================================== 00:28:17.939 [2024-10-11T10:02:02.571Z] Total : 4014.60 501.82 0.00 0.00 3982.54 563.20 11304.96 00:28:17.939 { 00:28:17.939 "results": [ 00:28:17.939 { 00:28:17.939 "job": "nvme0n1", 00:28:17.939 "core_mask": "0x2", 00:28:17.939 "workload": "randread", 00:28:17.939 "status": "finished", 00:28:17.939 "queue_depth": 16, 00:28:17.939 "io_size": 131072, 00:28:17.939 "runtime": 2.005182, 00:28:17.939 "iops": 4014.5981761256585, 00:28:17.939 "mibps": 501.8247720157073, 00:28:17.939 "io_failed": 0, 00:28:17.939 "io_timeout": 0, 00:28:17.939 "avg_latency_us": 3982.537195859213, 00:28:17.939 "min_latency_us": 563.2, 00:28:17.939 "max_latency_us": 11304.96 00:28:17.939 } 00:28:17.939 ], 00:28:17.939 "core_count": 1 00:28:17.939 } 00:28:17.939 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:17.939 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:17.939 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:17.939 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:17.939 | select(.opcode=="crc32c") 00:28:17.939 | "\(.module_name) \(.executed)"' 00:28:17.939 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1185893 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1185893 ']' 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1185893 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1185893 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:18.272 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
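Each pass ends the same way: get_accel_stats queries the bperf RPC socket, and the jq filter shown in the trace reduces the accel counters to a "module executed" pair, which the test compares against the expected module (software here, since DSA scanning is disabled). A minimal reproduction of that reduction over an illustrative payload (the JSON below is made up for the example, not captured from this run):

  # The jq filter is verbatim from host/digest.sh; the input is illustrative.
  echo '{"operations":[{"opcode":"crc32c","module_name":"software","executed":4014}]}' |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints: software 4014
  # `read -r acc_module acc_executed` then splits this line into the two variables.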
00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1185893' 00:28:18.273 killing process with pid 1185893 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1185893 00:28:18.273 Received shutdown signal, test time was about 2.000000 seconds 00:28:18.273 00:28:18.273 Latency(us) 00:28:18.273 [2024-10-11T10:02:02.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.273 [2024-10-11T10:02:02.905Z] =================================================================================================================== 00:28:18.273 [2024-10-11T10:02:02.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1185893 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1186605 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1186605 /var/tmp/bperf.sock 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1186605 ']' 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:18.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:18.273 12:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:18.534 [2024-10-11 12:02:02.878389] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:28:18.534 [2024-10-11 12:02:02.878442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186605 ] 00:28:18.534 [2024-10-11 12:02:02.944073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.534 [2024-10-11 12:02:02.973106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.534 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.534 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:18.534 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:18.534 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:18.534 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:18.794 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.794 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:19.054 nvme0n1 00:28:19.054 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:19.054 12:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:19.054 Running I/O for 2 seconds... 
00:28:21.382 29616.00 IOPS, 115.69 MiB/s [2024-10-11T10:02:06.014Z] 29628.00 IOPS, 115.73 MiB/s 00:28:21.382 Latency(us) 00:28:21.382 [2024-10-11T10:02:06.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.382 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:21.383 nvme0n1 : 2.00 29627.29 115.73 0.00 0.00 4313.50 2061.65 9557.33 00:28:21.383 [2024-10-11T10:02:06.015Z] =================================================================================================================== 00:28:21.383 [2024-10-11T10:02:06.015Z] Total : 29627.29 115.73 0.00 0.00 4313.50 2061.65 9557.33 00:28:21.383 { 00:28:21.383 "results": [ 00:28:21.383 { 00:28:21.383 "job": "nvme0n1", 00:28:21.383 "core_mask": "0x2", 00:28:21.383 "workload": "randwrite", 00:28:21.383 "status": "finished", 00:28:21.383 "queue_depth": 128, 00:28:21.383 "io_size": 4096, 00:28:21.383 "runtime": 2.004098, 00:28:21.383 "iops": 29627.293675259392, 00:28:21.383 "mibps": 115.731615918982, 00:28:21.383 "io_failed": 0, 00:28:21.383 "io_timeout": 0, 00:28:21.383 "avg_latency_us": 4313.500547920597, 00:28:21.383 "min_latency_us": 2061.653333333333, 00:28:21.383 "max_latency_us": 9557.333333333334 00:28:21.383 } 00:28:21.383 ], 00:28:21.383 "core_count": 1 00:28:21.383 } 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:21.383 | select(.opcode=="crc32c") 00:28:21.383 | "\(.module_name) \(.executed)"' 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1186605 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1186605 ']' 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1186605 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1186605 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1186605' 00:28:21.383 killing process with pid 1186605 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1186605 00:28:21.383 Received shutdown signal, test time was about 2.000000 seconds 00:28:21.383 00:28:21.383 Latency(us) 00:28:21.383 [2024-10-11T10:02:06.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.383 [2024-10-11T10:02:06.015Z] =================================================================================================================== 00:28:21.383 [2024-10-11T10:02:06.015Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1186605 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1187245 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1187245 /var/tmp/bperf.sock 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1187245 ']' 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:21.383 12:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:21.643 [2024-10-11 12:02:06.016983] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:28:21.643 [2024-10-11 12:02:06.017043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187245 ] 00:28:21.643 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.643 Zero copy mechanism will not be used. 00:28:21.643 [2024-10-11 12:02:06.092036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.643 [2024-10-11 12:02:06.121325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.213 12:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:22.213 12:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:22.213 12:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:22.213 12:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:22.213 12:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:22.472 12:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.472 12:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.733 nvme0n1 00:28:22.992 12:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:22.992 12:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:22.992 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:22.992 Zero copy mechanism will not be used. 00:28:22.992 Running I/O for 2 seconds... 
00:28:24.873 4875.00 IOPS, 609.38 MiB/s [2024-10-11T10:02:09.505Z] 5081.50 IOPS, 635.19 MiB/s 00:28:24.873 Latency(us) 00:28:24.873 [2024-10-11T10:02:09.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.873 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:24.873 nvme0n1 : 2.00 5079.70 634.96 0.00 0.00 3145.50 1242.45 14964.05 00:28:24.873 [2024-10-11T10:02:09.505Z] =================================================================================================================== 00:28:24.873 [2024-10-11T10:02:09.505Z] Total : 5079.70 634.96 0.00 0.00 3145.50 1242.45 14964.05 00:28:24.873 { 00:28:24.873 "results": [ 00:28:24.873 { 00:28:24.873 "job": "nvme0n1", 00:28:24.873 "core_mask": "0x2", 00:28:24.873 "workload": "randwrite", 00:28:24.873 "status": "finished", 00:28:24.873 "queue_depth": 16, 00:28:24.873 "io_size": 131072, 00:28:24.873 "runtime": 2.003859, 00:28:24.873 "iops": 5079.698721317219, 00:28:24.873 "mibps": 634.9623401646523, 00:28:24.873 "io_failed": 0, 00:28:24.873 "io_timeout": 0, 00:28:24.873 "avg_latency_us": 3145.4995055178965, 00:28:24.873 "min_latency_us": 1242.4533333333334, 00:28:24.873 "max_latency_us": 14964.053333333333 00:28:24.873 } 00:28:24.873 ], 00:28:24.873 "core_count": 1 00:28:24.873 } 00:28:24.873 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:24.873 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:24.873 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:24.873 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:24.873 | select(.opcode=="crc32c") 00:28:24.873 | "\(.module_name) \(.executed)"' 00:28:24.873 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1187245 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1187245 ']' 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1187245 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1187245 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1187245' 00:28:25.133 killing process with pid 1187245 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1187245 00:28:25.133 Received shutdown signal, test time was about 2.000000 seconds 00:28:25.133 00:28:25.133 Latency(us) 00:28:25.133 [2024-10-11T10:02:09.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.133 [2024-10-11T10:02:09.765Z] =================================================================================================================== 00:28:25.133 [2024-10-11T10:02:09.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:25.133 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1187245 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1184882 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1184882 ']' 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1184882 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1184882 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1184882' 00:28:25.394 killing process with pid 1184882 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1184882 00:28:25.394 12:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1184882 00:28:25.394 00:28:25.394 real 0m16.083s 00:28:25.394 user 0m31.642s 00:28:25.394 sys 0m3.761s 00:28:25.394 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:25.394 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:25.394 ************************************ 00:28:25.394 END TEST nvmf_digest_clean 00:28:25.394 ************************************ 00:28:25.656 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:25.657 ************************************ 00:28:25.657 START TEST nvmf_digest_error 00:28:25.657 ************************************ 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1187997 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1187997 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1187997 ']' 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:25.657 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.657 [2024-10-11 12:02:10.144847] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:25.657 [2024-10-11 12:02:10.144897] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.657 [2024-10-11 12:02:10.226021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.657 [2024-10-11 12:02:10.256739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.657 [2024-10-11 12:02:10.256771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.657 [2024-10-11 12:02:10.256777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.657 [2024-10-11 12:02:10.256781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.657 [2024-10-11 12:02:10.256785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
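The nvmf_digest_error variant that starts here differs from nvmf_digest_clean in one respect: crc32c is routed through SPDK's error-injecting accel module instead of the software module, so data-digest failures can be produced on demand. The RPC sequence visible in the trace that follows, condensed (rpc.py stands for scripts/rpc.py; socket flags are trimmed, with accel RPCs going to the target and bdev RPCs to bperf.sock):

  # Condensed from the trace below; not a standalone script.
  rpc.py accel_assign_opc -o crc32c -m error            # route crc32c via the error module
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable  # start with injection off
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 crc32c ops
  bdevperf.py -s /var/tmp/bperf.sock perform_tests      # digest errors are the expected result

With --ddgst on the controller, every corrupted crc32c computation surfaces as the 'data digest error' and COMMAND TRANSIENT TRANSPORT ERROR completions seen at the end of this run.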
00:28:25.657 [2024-10-11 12:02:10.257262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.598 [2024-10-11 12:02:10.995324] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.598 12:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.598 null0 00:28:26.598 [2024-10-11 12:02:11.072906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.598 [2024-10-11 12:02:11.097099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1188233 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1188233 /var/tmp/bperf.sock 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1188233 ']' 00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
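Here waitforlisten 1188233 /var/tmp/bperf.sock blocks until the freshly launched bdevperf answers RPC on its UNIX socket. A rough equivalent of that polling loop (paraphrased, with rpc_get_methods assumed as the liveness probe; the real helper in autotest_common.sh adds retry accounting and kill handling beyond this):

  # Rough sketch of waitforlisten against the bperf socket.
  for ((i = 100; i != 0; i--)); do
    if rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; then
      break                      # socket is up, bdevperf is ready
    fi
    sleep 0.5
  done
  (( i != 0 )) || exit 1         # process never started listening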
00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:26.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:26.598 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:26.598 [2024-10-11 12:02:11.152863] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:28:26.598 [2024-10-11 12:02:11.152911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188233 ]
00:28:26.598 [2024-10-11 12:02:11.227183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:26.859 [2024-10-11 12:02:11.256926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:26.859 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:26.859 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:26.859 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:26.859 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:27.119 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:27.119 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.119 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:27.119 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.119 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:27.119 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:27.379 nvme0n1
00:28:27.379 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:27.379 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:27.379 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
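For readability, the sequence xtraced above condenses to the following sketch. It is not part of the log: the command lines are copied from the trace, paths are assumed relative to an SPDK checkout, and the bare rpc.py calls are assumed to reach the nvmf target app's default RPC socket (which is what rpc_cmd resolves to in these tests), while the -s /var/tmp/bperf.sock calls address the bdevperf process.

    # Route all crc32c operations through the error-injection accel module.
    scripts/rpc.py accel_assign_opc -o crc32c -m error

    # Host side: bdevperf with its own RPC socket; -z parks it until an
    # explicit perform_tests call arrives.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z &

    # Attach an NVMe-oF/TCP controller with data digest (--ddgst) enabled
    # and unbounded retries, keeping injection disabled while connecting.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm the fault: corrupt the next 256 crc32c results, so reads fail
    # data-digest verification once I/O starts.
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256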
00:28:27.379 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:27.379 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:27.379 12:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:27.641 Running I/O for 2 seconds...
00:28:27.641 [2024-10-11 12:02:12.049291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:27.641 [2024-10-11 12:02:12.049322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.641 [2024-10-11 12:02:12.049331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.641 [2024-10-11 12:02:12.060601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:27.641 [2024-10-11 12:02:12.060621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.641 [2024-10-11 12:02:12.060628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.641 [2024-10-11 12:02:12.071517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:27.641 [2024-10-11 12:02:12.071535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.641 [2024-10-11 12:02:12.071542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.641 [2024-10-11 12:02:12.080354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:27.641 [2024-10-11 12:02:12.080373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.641 [2024-10-11 12:02:12.080380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.641 [2024-10-11 12:02:12.090203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:27.641 [2024-10-11 12:02:12.090220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.641 [2024-10-11 12:02:12.090227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:27.641 [2024-10-11 12:02:12.099453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:27.641 [2024-10-11 12:02:12.099470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.641 [2024-10-11 12:02:12.099477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
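Every record in this run repeats the three-line pattern above: the digest mismatch flagged on the initiator side (nvme_tcp.c), the READ it belongs to, and its completion, where (00/22) decodes as status code type 0x0 (generic) with status code 0x22, i.e. Command Transient Transport Error, as the completion line itself spells out. A quick way to tally the injected failures from a saved copy of this console output (the file name here is hypothetical):

    # Count completions that failed data-digest verification.
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error_console.log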
[... further data digest error records on tqpair=(0x12c68b0), identical apart from cid, lba, and timestamps, omitted ...]
00:28:28.430 27243.00 IOPS, 106.42 MiB/s [2024-10-11T10:02:13.062Z]
[... further data digest error records omitted ...]
00:28:28.692 [2024-10-11 12:02:13.179085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:28.692 [2024-10-11 12:02:13.179103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.692 [2024-10-11 12:02:13.179109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.188452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.188469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.188475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.197429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.197446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.197452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.206472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.206489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.206495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.215505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.215524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.215534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.223778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.223795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.223801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.232595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.232612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.232622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.242127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.242144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.242151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.251555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.251572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.251579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.259153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.259170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.259176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.270418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.270435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.270442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.280810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.280827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.280833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.288826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.288842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.288848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.297574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.297591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.297597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.307023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.307040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.692 [2024-10-11 12:02:13.307047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.692 [2024-10-11 12:02:13.315386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.692 [2024-10-11 12:02:13.315409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.692 [2024-10-11 12:02:13.315415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.324661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.324682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.324688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.333308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.333325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.333331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.341991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.342008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.342014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.352062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.352079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.352085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.360853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.360870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.360876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.369097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.369115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:23059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.369121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.378950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.378967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.378973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.388249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.388266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.388272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.397259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.397276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.397282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.406288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.406305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.406311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.953 [2024-10-11 12:02:13.413810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.953 [2024-10-11 12:02:13.413827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.953 [2024-10-11 12:02:13.413833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.423097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.423114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.423121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.432509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.432526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.432532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.441264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.441281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.441287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.450626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.450642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.450649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.458762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.458779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.458785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.468481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.468498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.468507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.476660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.476681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.476687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.486008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.486025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.486031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.494759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 
00:28:28.954 [2024-10-11 12:02:13.494776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.494782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.503317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.503334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.503340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.512538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.512554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.512560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.521449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.521465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.521471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.529744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.529761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.529767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.538227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.538243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.538249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.548247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.548264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.548271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.557671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.557688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.557694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.566286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.566304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.566311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:28.954 [2024-10-11 12:02:13.574134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:28.954 [2024-10-11 12:02:13.574151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.954 [2024-10-11 12:02:13.574158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.215 [2024-10-11 12:02:13.584575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.215 [2024-10-11 12:02:13.584592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.215 [2024-10-11 12:02:13.584598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.593422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.593438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.593445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.602350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.602367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.602373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.614117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.614135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.614141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.621764] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.621781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.621791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.631927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.631944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.631950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.641994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.642011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.642017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.650584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.650601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.650607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.659311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.659328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.659334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.667665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.667686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.667692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.677625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.677642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.677648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.686688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.686705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.686711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.695431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.695447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.695453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.703671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.703690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.703697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.712976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.712993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.712999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.722455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.722472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.722478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.730349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.730366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.730373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.740102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.740120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.740127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.749074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.749091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.749097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.757939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.757955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.757962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.767091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.767108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.767114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.776008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.776024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.776030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.785296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.785313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.785320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.793124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.793140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.793146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.803942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.803959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.803965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.815557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.815575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.815582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.824911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.824928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.824935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.833544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.833561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.833568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.216 [2024-10-11 12:02:13.842205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.216 [2024-10-11 12:02:13.842223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.216 [2024-10-11 12:02:13.842229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.850942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.850960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.850966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.860467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.860483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.860493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.868773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.868790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.478 [2024-10-11 12:02:13.868796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.877474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.877491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.877501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.886473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.886490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.886496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.895841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.895858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.895865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.904593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.904610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.904616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.913634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.913651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.913657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.921843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.921861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.921867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.931227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.931244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:20541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.931250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.940171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.940188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.940194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.478 [2024-10-11 12:02:13.948134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.478 [2024-10-11 12:02:13.948151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.478 [2024-10-11 12:02:13.948157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.479 [2024-10-11 12:02:13.957688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.479 [2024-10-11 12:02:13.957705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.479 [2024-10-11 12:02:13.957712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.479 [2024-10-11 12:02:13.966757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.479 [2024-10-11 12:02:13.966774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.479 [2024-10-11 12:02:13.966781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.479 [2024-10-11 12:02:13.975092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.479 [2024-10-11 12:02:13.975108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.479 [2024-10-11 12:02:13.975114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.479 [2024-10-11 12:02:13.983838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.479 [2024-10-11 12:02:13.983854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.479 [2024-10-11 12:02:13.983861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.479 [2024-10-11 12:02:13.993994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0) 00:28:29.479 [2024-10-11 12:02:13.994011] nvme_qpair.c: 
00:28:29.479 [2024-10-11 12:02:13.993994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:29.479 [2024-10-11 12:02:13.994011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.479 [2024-10-11 12:02:13.994018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.479 [2024-10-11 12:02:14.004017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:29.479 [2024-10-11 12:02:14.004035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.479 [2024-10-11 12:02:14.004041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.479 [2024-10-11 12:02:14.012355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:29.479 [2024-10-11 12:02:14.012372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.479 [2024-10-11 12:02:14.012381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.479 [2024-10-11 12:02:14.020693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:29.479 [2024-10-11 12:02:14.020710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.479 [2024-10-11 12:02:14.020716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.479 [2024-10-11 12:02:14.030320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12c68b0)
00:28:29.479 [2024-10-11 12:02:14.030336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:29.479 [2024-10-11 12:02:14.030343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:29.479 27724.50 IOPS, 108.30 MiB/s
00:28:29.479 Latency(us)
00:28:29.479 [2024-10-11T10:02:14.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:29.479 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:29.479 nvme0n1 : 2.00 27740.94 108.36 0.00 0.00 4610.07 2225.49 16820.91
00:28:29.479 [2024-10-11T10:02:14.111Z] ===================================================================================================================
00:28:29.479 [2024-10-11T10:02:14.111Z] Total : 27740.94 108.36 0.00 0.00 4610.07 2225.49 16820.91
00:28:29.479 {
00:28:29.479   "results": [
00:28:29.479     {
00:28:29.479       "job": "nvme0n1",
00:28:29.479       "core_mask": "0x2",
00:28:29.479       "workload": "randread",
00:28:29.479       "status": "finished",
00:28:29.479       "queue_depth": 128,
00:28:29.479       "io_size": 4096,
00:28:29.479       "runtime": 2.003429,
00:28:29.479       "iops": 27740.93816152207,
00:28:29.479       "mibps": 108.36303969344559,
00:28:29.479       "io_failed": 0,
00:28:29.479       "io_timeout": 0,
00:28:29.479       "avg_latency_us": 4610.074642388039,
00:28:29.479       "min_latency_us": 2225.4933333333333,
00:28:29.479       "max_latency_us": 16820.906666666666
00:28:29.479     }
00:28:29.479   ],
00:28:29.479   "core_count": 1
00:28:29.479 }
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1188233
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1188233 ']'
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1188233
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1188233
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1188233'
killing process with pid 1188233
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1188233
Received shutdown signal, test time was about 2.000000 seconds
Latency(us)
[2024-10-11T10:02:14.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-11T10:02:14.371Z] ===================================================================================================================
[2024-10-11T10:02:14.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1188233
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1188837
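The pass/fail gate for this case is the (( 217 > 0 )) check above: the run passes only if the bdev accumulated transient transport errors. Outside the harness, the same readback plus a quick sanity pass over the summary numbers can be reproduced with the shell below; a minimal sketch assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and that jq/awk are available (the "count" variable name is illustrative, not from the harness):

# Read back the per-bdev NVMe error counters kept by --nvme-error-stat and
# extract the transient transport error count (same RPC and jq path as the
# get_transient_errcount helper traced above):
count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( count > 0 )) && echo "injected digest errors were counted: $count"

# Sanity-check the summary table: MiB/s is just IOPS x IO size / 2^20,
# e.g. 27740.94 * 4096 / 1048576 = 108.36, matching the row above.
awk 'BEGIN { printf "%.2f MiB/s\n", 27740.94 * 4096 / 1048576 }'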
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1188837 /var/tmp/bperf.sock 00:28:29.999 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1188837 ']' 00:28:29.999 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:29.999 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.999 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:29.999 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.999 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:29.999 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.999 [2024-10-11 12:02:14.473543] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:29.999 [2024-10-11 12:02:14.473602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188837 ] 00:28:29.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.999 Zero copy mechanism will not be used. 
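The get_transient_errcount trace above is the pass/fail heart of this test: with --nvme-error-stat enabled, the injected digest failures surface as command_transient_transport_error in the bdev iostat. A minimal standalone sketch of the same readback, assuming an SPDK checkout and the /var/tmp/bperf.sock RPC socket from this run (the counter path is copied from the jq filter in the trace; 'errcount' is our own name):

  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) || echo "no transient transport errors recorded" >&2

In the run above this check evaluated to (( 217 > 0 )): 217 READs completed with a transient transport error and were counted.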
00:28:29.999 [2024-10-11 12:02:14.546823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:29.999 [2024-10-11 12:02:14.576027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:30.260 12:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:30.830 nvme0n1
00:28:30.830 12:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:30.830 12:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:30.830 12:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:30.830 12:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:30.830 12:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:30.830 12:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:30.830 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:30.830 Zero copy mechanism will not be used.
00:28:30.830 Running I/O for 2 seconds...
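Condensed, the setup traced above amounts to the following RPC sequence (a sketch assuming an SPDK checkout; the socket, target address, NQN, and injection parameters are copied from the log, and the two accel_error_inject_error calls are issued via rpc_cmd against the target app's default RPC socket rather than bperf.sock):

  # Start bdevperf idle (-z) as the TCP host under test
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # Count NVMe errors per status code; retry failed I/O indefinitely
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure crc32c injection starts out disabled, then attach with data digest enabled
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject crc32c corruption (parameters copied from the trace), then run the timed workload
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests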
00:28:30.830 [2024-10-11 12:02:15.309046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40)
00:28:30.830 [2024-10-11 12:02:15.309077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:30.830 [2024-10-11 12:02:15.309086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error on tqpair=(0xf97d40), the failing READ on qid:1 with len:32 and varying cid/lba, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats every ~8-14 ms from 12:02:15.320 through 12:02:16.289; repeated entries elided ...]
2728.00 IOPS, 341.00 MiB/s [2024-10-11T10:02:16.510Z]
[... the pattern continues from 12:02:16.301 through 12:02:16.593; repeated entries elided ...]
00:28:32.140 [2024-10-11 12:02:16.603734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40)
00:28:32.140 [2024-10-11 12:02:16.603752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:32.140 [2024-10-11 12:02:16.603758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:32.140 [2024-10-11 12:02:16.614801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.140 [2024-10-11 12:02:16.614823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.140 [2024-10-11 12:02:16.614829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.140 [2024-10-11 12:02:16.624264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.140 [2024-10-11 12:02:16.624283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.140 [2024-10-11 12:02:16.624289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.140 [2024-10-11 12:02:16.628477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.140 [2024-10-11 12:02:16.628496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.140 [2024-10-11 12:02:16.628502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.140 [2024-10-11 12:02:16.637903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.140 [2024-10-11 12:02:16.637920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.140 [2024-10-11 12:02:16.637926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.140 [2024-10-11 12:02:16.648933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.648951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.648958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.660259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.660278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.660284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.670167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.670185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.670191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.682431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.682449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.682455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.693403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.693421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.693427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.704917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.704935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.704941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.716131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.716149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.716156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.724717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.724735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.724741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.732393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.732411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.732417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.736508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.736526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.736532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.743878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.743894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.743901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.754215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.754233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.754239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.141 [2024-10-11 12:02:16.765395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.141 [2024-10-11 12:02:16.765412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.141 [2024-10-11 12:02:16.765418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.776910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.776927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.776940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.788894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.788913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.788919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.801437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.801455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.801461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.813806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.813825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.813831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.825075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.825093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.825099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.835992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.836010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.836016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.846566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.846584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.846590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.858699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.858717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.858723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.869078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.869096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.869102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.880068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.880086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.880092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.890400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.890417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 
[2024-10-11 12:02:16.890424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.900422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.900440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.900446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.910685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.910702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.910708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.921161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.921178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.921184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.931475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.931492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.931499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.940080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.940098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.940104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.946875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.946892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.946898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.956710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.956727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.956736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.967162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.967180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.967187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.977827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.977843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.977850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:16.990164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:16.990181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:16.990187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:17.002002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:17.002020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:17.002026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:17.012003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:17.012020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:17.012026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.402 [2024-10-11 12:02:17.023712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.402 [2024-10-11 12:02:17.023730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.402 [2024-10-11 12:02:17.023736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.663 [2024-10-11 12:02:17.033230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.663 [2024-10-11 12:02:17.033248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.663 [2024-10-11 12:02:17.033254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.663 [2024-10-11 12:02:17.044621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.663 [2024-10-11 12:02:17.044639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.663 [2024-10-11 12:02:17.044645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.663 [2024-10-11 12:02:17.056235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.663 [2024-10-11 12:02:17.056256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.663 [2024-10-11 12:02:17.056262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.663 [2024-10-11 12:02:17.067110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.663 [2024-10-11 12:02:17.067128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.663 [2024-10-11 12:02:17.067135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.663 [2024-10-11 12:02:17.078393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.663 [2024-10-11 12:02:17.078410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.663 [2024-10-11 12:02:17.078416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.663 [2024-10-11 12:02:17.090829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.663 [2024-10-11 12:02:17.090847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.663 [2024-10-11 12:02:17.090853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.102734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.102751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.102757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.115373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.115390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.115396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.127050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.127067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.127074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.139025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.139042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.139048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.150717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.150735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.150741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.161137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.161154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.161161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.172095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.172112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.172119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.184060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.184078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.184084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.196047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.196065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.196071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.208247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.208265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.208271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.217690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.217708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.217715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.227214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.227232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.227238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.237820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.237839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.237845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.248510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.248528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.248537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.257685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.257703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.257709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.269521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 
[2024-10-11 12:02:17.269539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.269545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.281021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.281039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.281045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:32.664 [2024-10-11 12:02:17.290942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.664 [2024-10-11 12:02:17.290960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.664 [2024-10-11 12:02:17.290967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:32.925 2830.00 IOPS, 353.75 MiB/s [2024-10-11T10:02:17.557Z] [2024-10-11 12:02:17.300189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf97d40) 00:28:32.925 [2024-10-11 12:02:17.300206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:32.925 [2024-10-11 12:02:17.300212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:32.925 00:28:32.925 Latency(us) 00:28:32.925 [2024-10-11T10:02:17.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.925 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:32.925 nvme0n1 : 2.01 2829.69 353.71 0.00 0.00 5649.30 822.61 13598.72 00:28:32.925 [2024-10-11T10:02:17.557Z] =================================================================================================================== 00:28:32.925 [2024-10-11T10:02:17.557Z] Total : 2829.69 353.71 0.00 0.00 5649.30 822.61 13598.72 00:28:32.925 { 00:28:32.925 "results": [ 00:28:32.925 { 00:28:32.925 "job": "nvme0n1", 00:28:32.925 "core_mask": "0x2", 00:28:32.925 "workload": "randread", 00:28:32.925 "status": "finished", 00:28:32.925 "queue_depth": 16, 00:28:32.925 "io_size": 131072, 00:28:32.925 "runtime": 2.00587, 00:28:32.925 "iops": 2829.6948456280816, 00:28:32.925 "mibps": 353.7118557035102, 00:28:32.925 "io_failed": 0, 00:28:32.925 "io_timeout": 0, 00:28:32.925 "avg_latency_us": 5649.298792576932, 00:28:32.925 "min_latency_us": 822.6133333333333, 00:28:32.925 "max_latency_us": 13598.72 00:28:32.925 } 00:28:32.925 ], 00:28:32.925 "core_count": 1 00:28:32.925 } 00:28:32.925 12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:32.925 12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:32.925 12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
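
For reference, the get_transient_errcount check traced next boils down to this minimal sketch (absolute workspace paths shortened; the errcount variable is illustrative, and the counter only exists because bdev_nvme_set_options --nvme-error-stat was passed at setup):

  # Ask bdevperf for per-bdev NVMe error statistics over its RPC socket
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # digest.sh@71: fail unless the digest errors surfaced as transient transport errors

Here the injected CRC32C corruptions produced 183 such completions, so the randread pass succeeds.
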
00:28:32.925 12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:32.925 | .driver_specific
00:28:32.925 | .nvme_error
00:28:32.925 | .status_code
00:28:32.925 | .command_transient_transport_error'
00:28:32.925 12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 ))
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1188837
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1188837 ']'
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1188837
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1188837
00:28:33.186 12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1188837'
killing process with pid 1188837
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1188837
Received shutdown signal, test time was about 2.000000 seconds
00:28:33.186 Latency(us)
[2024-10-11T10:02:17.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-10-11T10:02:17.818Z] ===================================================================================================================
[2024-10-11T10:02:17.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:33.186 12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1188837
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1189387
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1189387 /var/tmp/bperf.sock
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1189387 ']'
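
digest.sh starts a dedicated bdevperf instance per workload; roughly, the launch-and-wait pattern behind the bperfpid/waitforlisten lines above and the @57 command below is (a sketch, assuming the pid is captured from the backgrounded process; helper names from autotest_common.sh):

  # Start bdevperf on core 1 (-m 2) with its own RPC socket; -z queues the job
  # until a perform_tests RPC arrives instead of running it immediately
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!                                    # assumed capture; the log shows bperfpid=1189387
  waitforlisten "$bperfpid" /var/tmp/bperf.sock  # poll (max_retries=100) until the socket answers
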
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:33.186
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
12:02:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:33.186
[2024-10-11 12:02:17.715564] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
[2024-10-11 12:02:17.715622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189387 ]
[2024-10-11 12:02:17.792166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.447
[2024-10-11 12:02:17.820940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.019
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.279
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.279
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
12:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.539
nvme0n1 00:28:34.539
12:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
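
With the controller attached, the error path for this pass is fully configured; condensed, the RPC sequence just traced is (workspace paths shortened; note that accel_error_inject_error goes through rpc_cmd, i.e. the default application socket rather than /var/tmp/bperf.sock, and the reading of -i 256 as an injection count/interval is an assumption):

  BPERF='scripts/rpc.py -s /var/tmp/bperf.sock'
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error stats; retry forever
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable           # clear any stale crc32c injection
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                             # attach with data digest (DDGST) on
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # start corrupting crc32c results

The unlimited retry count matters here: every corrupted digest completes with a retriable transient transport error, so bdevperf keeps the run alive while the error counter accumulates.
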
12:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
12:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.539
12:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.539
12:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:34.539
12:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.539
Running I/O for 2 seconds...
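
bperf_py drives the queued job through bdevperf's RPC helper; a minimal sketch of that step (path from this run):

  # Kick off the configured 2 s randwrite job; the helper returns when the run completes
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

From here on, each injected CRC32C corruption shows up as a tcp.c data digest error and the matching WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), exactly as in the randread pass above.
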
00:28:34.539 [2024-10-11 12:02:19.119900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e49b0
00:28:34.539 [2024-10-11 12:02:19.120884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:34.539 [2024-10-11 12:02:19.120912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:34.539 [2024-10-11 12:02:19.128657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f4f40
00:28:34.539 [2024-10-11 12:02:19.129620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:34.539 [2024-10-11 12:02:19.129638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
[... the same three-line pattern -- tcp.c:2233 data_crc32_calc_done Data digest error on tqpair=(0x1c6d890) with a varying pdu, nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats through 12:02:19.400 and continues past the end of this excerpt; only the timestamp, pdu, cid, and lba fields vary ...]
00:28:34.803 [2024-10-11 12:02:19.400990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:34.803 [2024-10-11 12:02:19.409522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f2d80 00:28:34.803 [2024-10-11 12:02:19.410914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.803 [2024-10-11 12:02:19.410930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:34.803 [2024-10-11 12:02:19.417055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ee190 00:28:34.803 [2024-10-11 12:02:19.417770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.803 [2024-10-11 12:02:19.417786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:34.803 [2024-10-11 12:02:19.425914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fda78 00:28:34.803 [2024-10-11 12:02:19.426965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:34.803 [2024-10-11 12:02:19.426981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.434318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fd208 00:28:35.065 [2024-10-11 12:02:19.435373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.435389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.442805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166df118 00:28:35.065 [2024-10-11 12:02:19.443803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.443819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.451237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fc560 00:28:35.065 [2024-10-11 12:02:19.452292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.452308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.459730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fb480 00:28:35.065 [2024-10-11 12:02:19.460769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8069 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.460785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.468200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fa3a0 00:28:35.065 [2024-10-11 12:02:19.469229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.469245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.476664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f92c0 00:28:35.065 [2024-10-11 12:02:19.477656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.477676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.485121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f81e0 00:28:35.065 [2024-10-11 12:02:19.486147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.486164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.493562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7100 00:28:35.065 [2024-10-11 12:02:19.494618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.494633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.502008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f6020 00:28:35.065 [2024-10-11 12:02:19.502999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.503015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.510490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e3d08 00:28:35.065 [2024-10-11 12:02:19.511530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.511546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.520050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f0350 00:28:35.065 [2024-10-11 12:02:19.521556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:1313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.521571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.526112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fda78 00:28:35.065 [2024-10-11 12:02:19.526756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.526772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.534726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fd208 00:28:35.065 [2024-10-11 12:02:19.535419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.535434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.543183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f4298 00:28:35.065 [2024-10-11 12:02:19.543882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.543898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.551630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f31b8 00:28:35.065 [2024-10-11 12:02:19.552339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.552355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.560105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f20d8 00:28:35.065 [2024-10-11 12:02:19.560777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.560793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.568591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f0ff8 00:28:35.065 [2024-10-11 12:02:19.569287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.569303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.577081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f1430 00:28:35.065 [2024-10-11 12:02:19.577736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:62 nsid:1 lba:212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.577755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.585520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f2510 00:28:35.065 [2024-10-11 12:02:19.586237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.586253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.593960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f35f0 00:28:35.065 [2024-10-11 12:02:19.594656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.594676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.602590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fda78 00:28:35.065 [2024-10-11 12:02:19.603289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.065 [2024-10-11 12:02:19.603304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:35.065 [2024-10-11 12:02:19.611365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e4578 00:28:35.066 [2024-10-11 12:02:19.612063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.612079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.620694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f6cc8 00:28:35.066 [2024-10-11 12:02:19.621731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.621747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.628572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e1f80 00:28:35.066 [2024-10-11 12:02:19.629287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.629303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.636923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e3060 00:28:35.066 [2024-10-11 12:02:19.637626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.637643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.645378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f46d0 00:28:35.066 [2024-10-11 12:02:19.646088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.646104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.653861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f35f0 00:28:35.066 [2024-10-11 12:02:19.654554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.654574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.662317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f2510 00:28:35.066 [2024-10-11 12:02:19.662986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.663002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.670782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f1430 00:28:35.066 [2024-10-11 12:02:19.671489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.671505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.679246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e38d0 00:28:35.066 [2024-10-11 12:02:19.679905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.679922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.066 [2024-10-11 12:02:19.687684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f6458 00:28:35.066 [2024-10-11 12:02:19.688375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.066 [2024-10-11 12:02:19.688391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.696159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7538 00:28:35.328 [2024-10-11 
12:02:19.696863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.328 [2024-10-11 12:02:19.696880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.704621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f8618 00:28:35.328 [2024-10-11 12:02:19.705335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.328 [2024-10-11 12:02:19.705351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.713077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f96f8 00:28:35.328 [2024-10-11 12:02:19.713729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.328 [2024-10-11 12:02:19.713745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.721546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fa7d8 00:28:35.328 [2024-10-11 12:02:19.722254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.328 [2024-10-11 12:02:19.722271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.729997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e1710 00:28:35.328 [2024-10-11 12:02:19.730705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.328 [2024-10-11 12:02:19.730721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.738462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e99d8 00:28:35.328 [2024-10-11 12:02:19.739158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.328 [2024-10-11 12:02:19.739174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.746941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e95a0 00:28:35.328 [2024-10-11 12:02:19.747604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.328 [2024-10-11 12:02:19.747620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.755411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f3a28 
00:28:35.328 [2024-10-11 12:02:19.756067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.328 [2024-10-11 12:02:19.756083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.328 [2024-10-11 12:02:19.763865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f2948 00:28:35.329 [2024-10-11 12:02:19.764551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.764566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.772314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f1868 00:28:35.329 [2024-10-11 12:02:19.772972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.772987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.780753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f0788 00:28:35.329 [2024-10-11 12:02:19.781458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.781474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.789213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ef6a8 00:28:35.329 [2024-10-11 12:02:19.789900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.789916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.797673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ee5c8 00:28:35.329 [2024-10-11 12:02:19.798362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.798378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.806124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ed4e8 00:28:35.329 [2024-10-11 12:02:19.806814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.806830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.814578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) 
with pdu=0x2000166f57b0 00:28:35.329 [2024-10-11 12:02:19.815276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.815293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.823034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e3d08 00:28:35.329 [2024-10-11 12:02:19.823720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.823737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.831492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f6020 00:28:35.329 [2024-10-11 12:02:19.832214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.832230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.839963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7100 00:28:35.329 [2024-10-11 12:02:19.840653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.840672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.848417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f81e0 00:28:35.329 [2024-10-11 12:02:19.849129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.849144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.856892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f92c0 00:28:35.329 [2024-10-11 12:02:19.857588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.857604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.865330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fa3a0 00:28:35.329 [2024-10-11 12:02:19.866037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.866053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.873786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c6d890) with pdu=0x2000166e1b48 00:28:35.329 [2024-10-11 12:02:19.874460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.874478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.882256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e0a68 00:28:35.329 [2024-10-11 12:02:19.882974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.882990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.890726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166dfdc0 00:28:35.329 [2024-10-11 12:02:19.891417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.891433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.899184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e0ea0 00:28:35.329 [2024-10-11 12:02:19.899874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.899889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.907629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e1f80 00:28:35.329 [2024-10-11 12:02:19.908326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.908341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.916081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e3060 00:28:35.329 [2024-10-11 12:02:19.916785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.916801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.924567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f46d0 00:28:35.329 [2024-10-11 12:02:19.925266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.925281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.933035] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f35f0 00:28:35.329 [2024-10-11 12:02:19.933726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.933743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.941496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f2510 00:28:35.329 [2024-10-11 12:02:19.942208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.942224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.949965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f1430 00:28:35.329 [2024-10-11 12:02:19.950669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.950685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.329 [2024-10-11 12:02:19.958432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e38d0 00:28:35.329 [2024-10-11 12:02:19.959089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.329 [2024-10-11 12:02:19.959106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:19.966874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f6458 00:28:35.591 [2024-10-11 12:02:19.967568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:19.967583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:19.975340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7538 00:28:35.591 [2024-10-11 12:02:19.976009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:19.976025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:19.983808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f8618 00:28:35.591 [2024-10-11 12:02:19.984496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:19.984511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:19.992258] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f96f8 00:28:35.591 [2024-10-11 12:02:19.992944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:19.992960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.000699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fa7d8 00:28:35.591 [2024-10-11 12:02:20.001366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.001382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.009661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e1710 00:28:35.591 [2024-10-11 12:02:20.010439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.010464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.018264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e0630 00:28:35.591 [2024-10-11 12:02:20.018943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.018964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.026749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f31b8 00:28:35.591 [2024-10-11 12:02:20.027395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.027411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.035199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f0ff8 00:28:35.591 [2024-10-11 12:02:20.035909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.035925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.043705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166eee38 00:28:35.591 [2024-10-11 12:02:20.044400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.044417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 
[2024-10-11 12:02:20.052156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ebfd0 00:28:35.591 [2024-10-11 12:02:20.052862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.052878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.060638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e3498 00:28:35.591 [2024-10-11 12:02:20.061331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.061347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.069136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7970 00:28:35.591 [2024-10-11 12:02:20.069827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.069844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.077677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166df550 00:28:35.591 [2024-10-11 12:02:20.078361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.591 [2024-10-11 12:02:20.078378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.591 [2024-10-11 12:02:20.086130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e1710 00:28:35.591 [2024-10-11 12:02:20.086823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.592 [2024-10-11 12:02:20.086839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.592 [2024-10-11 12:02:20.094597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e95a0 00:28:35.592 [2024-10-11 12:02:20.095277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.592 [2024-10-11 12:02:20.095298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:35.592 [2024-10-11 12:02:20.103039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f2948 00:28:35.592 [2024-10-11 12:02:20.103720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.592 [2024-10-11 12:02:20.103736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 
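The "Data digest error" records above come from the host-side CRC32C check on NVMe/TCP data PDUs: when DDGST is enabled, the receiver recomputes a CRC32C (Castagnoli) digest over the PDU's data and compares it with the DDGST field carried in the PDU, failing the command with a transient transport error on mismatch. Below is a minimal, generic sketch of that digest computation; it is a plain bitwise software CRC32C for illustration, not SPDK's optimized implementation, and the function name and check string are illustrative only.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli; polynomial 0x1EDC6F41, reflected 0x82F63B78),
 * the digest NVMe/TCP carries in its HDGST/DDGST fields. Bit-by-bit for
 * clarity; production code uses lookup tables or the SSE4.2 crc32 instruction. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* "123456789" is the standard CRC check input; CRC32C of it is 0xE3069283. */
    const uint8_t data[] = "123456789";
    printf("DDGST = 0x%08X\n", crc32c(data, sizeof(data) - 1));
    /* A receiver would compare this value against the PDU's DDGST field and,
     * on mismatch, complete the command as in the log records above. */
    return 0;
}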
00:28:35.592 29840.00 IOPS, 116.56 MiB/s [2024-10-11T10:02:20.224Z]
[... the digest-error/WRITE/TRANSIENT TRANSPORT ERROR cycle continues past the throughput sample; the captured output ends mid-record ...]
00:28:35.854 [2024-10-11 12:02:20.340674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e49b0
00:28:35.854 [2024-10-11 12:02:20.341476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:35.854 [2024-10-11 12:02:20.341492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.349141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e73e0 00:28:35.854 [2024-10-11 12:02:20.349957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.349972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.357611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e84c0 00:28:35.854 [2024-10-11 12:02:20.358394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.358410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.366061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f4b08 00:28:35.854 [2024-10-11 12:02:20.366853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.366868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.374493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f6020 00:28:35.854 [2024-10-11 12:02:20.375284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.375299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.382934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ef6a8 00:28:35.854 [2024-10-11 12:02:20.383751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.383766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.391388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f81e0 00:28:35.854 [2024-10-11 12:02:20.392196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.392212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.399839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f9f68 00:28:35.854 [2024-10-11 12:02:20.400613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19564 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.400629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.408280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f57b0 00:28:35.854 [2024-10-11 12:02:20.409101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.409117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.416717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ec840 00:28:35.854 [2024-10-11 12:02:20.417479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.417494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.425160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166eb328 00:28:35.854 [2024-10-11 12:02:20.425975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.425991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.433617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ea248 00:28:35.854 [2024-10-11 12:02:20.434436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.854 [2024-10-11 12:02:20.434452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.854 [2024-10-11 12:02:20.442075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e5220 00:28:35.855 [2024-10-11 12:02:20.442897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.855 [2024-10-11 12:02:20.442913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.855 [2024-10-11 12:02:20.450527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e7818 00:28:35.855 [2024-10-11 12:02:20.451331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.855 [2024-10-11 12:02:20.451347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.855 [2024-10-11 12:02:20.459052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fd208 00:28:35.855 [2024-10-11 12:02:20.459876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:20142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.855 [2024-10-11 12:02:20.459892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.855 [2024-10-11 12:02:20.467484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fda78 00:28:35.855 [2024-10-11 12:02:20.468316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.855 [2024-10-11 12:02:20.468331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.855 [2024-10-11 12:02:20.475949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ff3c8 00:28:35.855 [2024-10-11 12:02:20.476728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.855 [2024-10-11 12:02:20.476744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:35.855 [2024-10-11 12:02:20.484411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166eaab8 00:28:36.116 [2024-10-11 12:02:20.485224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.116 [2024-10-11 12:02:20.485239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:36.116 [2024-10-11 12:02:20.492875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ebb98 00:28:36.116 [2024-10-11 12:02:20.493638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.116 [2024-10-11 12:02:20.493654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:36.116 [2024-10-11 12:02:20.501324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ee190 00:28:36.116 [2024-10-11 12:02:20.502127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.116 [2024-10-11 12:02:20.502142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:36.116 [2024-10-11 12:02:20.509760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166efae0 00:28:36.116 [2024-10-11 12:02:20.510572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.116 [2024-10-11 12:02:20.510587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:36.116 [2024-10-11 12:02:20.518181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f8e88 00:28:36.116 [2024-10-11 12:02:20.518987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:119 nsid:1 lba:5797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.116 [2024-10-11 12:02:20.519002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:36.116 [2024-10-11 12:02:20.526640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f9b30 00:28:36.116 [2024-10-11 12:02:20.527451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.116 [2024-10-11 12:02:20.527467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:36.116 [2024-10-11 12:02:20.535356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fb048 00:28:36.116 [2024-10-11 12:02:20.535963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.116 [2024-10-11 12:02:20.535982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:36.116 [2024-10-11 12:02:20.543961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e6738 00:28:36.116 [2024-10-11 12:02:20.544875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.116 [2024-10-11 12:02:20.544891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.552422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e4578 00:28:36.117 [2024-10-11 12:02:20.553320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.553336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.560859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e6fa8 00:28:36.117 [2024-10-11 12:02:20.561777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.561793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.569297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e8088 00:28:36.117 [2024-10-11 12:02:20.570240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.570256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.577180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ec408 00:28:36.117 [2024-10-11 12:02:20.578098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.578114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.586465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f3a28 00:28:36.117 [2024-10-11 12:02:20.587462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.587478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.595054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e1f80 00:28:36.117 [2024-10-11 12:02:20.596055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.596072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.603488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f0788 00:28:36.117 [2024-10-11 12:02:20.604529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.604546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.611314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166edd58 00:28:36.117 [2024-10-11 12:02:20.612653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.612674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.619159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e9168 00:28:36.117 [2024-10-11 12:02:20.619857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.627621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ef6a8 00:28:36.117 [2024-10-11 12:02:20.628301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.628317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.636234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e8d30 00:28:36.117 [2024-10-11 
12:02:20.636945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.636960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.644688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e7c50 00:28:36.117 [2024-10-11 12:02:20.645379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.645395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.653137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e4578 00:28:36.117 [2024-10-11 12:02:20.653858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.653874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.661577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e6738 00:28:36.117 [2024-10-11 12:02:20.662278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.662294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.670043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fc128 00:28:36.117 [2024-10-11 12:02:20.670729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.670745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.678504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f6458 00:28:36.117 [2024-10-11 12:02:20.679195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.679212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.686954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ec408 00:28:36.117 [2024-10-11 12:02:20.687644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.687660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.695377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f9b30 
00:28:36.117 [2024-10-11 12:02:20.696078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.696094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.703810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166de8a8 00:28:36.117 [2024-10-11 12:02:20.704501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.704517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.712264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e0630 00:28:36.117 [2024-10-11 12:02:20.712937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.712954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.720728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fd640 00:28:36.117 [2024-10-11 12:02:20.721416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.721432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.729177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f5378 00:28:36.117 [2024-10-11 12:02:20.729868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.729884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.737616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e4de8 00:28:36.117 [2024-10-11 12:02:20.738308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.738324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.117 [2024-10-11 12:02:20.746053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f20d8 00:28:36.117 [2024-10-11 12:02:20.746729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.117 [2024-10-11 12:02:20.746745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.379 [2024-10-11 12:02:20.754528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) 
with pdu=0x2000166f2d80 00:28:36.379 [2024-10-11 12:02:20.755223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.379 [2024-10-11 12:02:20.755242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.379 [2024-10-11 12:02:20.762996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7538 00:28:36.379 [2024-10-11 12:02:20.763703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.379 [2024-10-11 12:02:20.763719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.379 [2024-10-11 12:02:20.771456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e9168 00:28:36.379 [2024-10-11 12:02:20.772143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.379 [2024-10-11 12:02:20.772159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.379 [2024-10-11 12:02:20.779904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e8088 00:28:36.379 [2024-10-11 12:02:20.780595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.379 [2024-10-11 12:02:20.780611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.379 [2024-10-11 12:02:20.788339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e6b70 00:28:36.379 [2024-10-11 12:02:20.789043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.379 [2024-10-11 12:02:20.789059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.796769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e6300 00:28:36.380 [2024-10-11 12:02:20.797474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.797490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.805223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fb480 00:28:36.380 [2024-10-11 12:02:20.805917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.805933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.813704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1c6d890) with pdu=0x2000166fc560 00:28:36.380 [2024-10-11 12:02:20.814374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.814389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.822158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f31b8 00:28:36.380 [2024-10-11 12:02:20.822874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.822890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.830602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f81e0 00:28:36.380 [2024-10-11 12:02:20.831291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.831307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.839041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ef6a8 00:28:36.380 [2024-10-11 12:02:20.839734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.839750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.847491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7da8 00:28:36.380 [2024-10-11 12:02:20.848181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.848197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.855957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166feb58 00:28:36.380 [2024-10-11 12:02:20.856672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.856688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.864406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166df988 00:28:36.380 [2024-10-11 12:02:20.865057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.865074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.872876] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e27f0 00:28:36.380 [2024-10-11 12:02:20.873582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.873597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.881339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e88f8 00:28:36.380 [2024-10-11 12:02:20.882050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.882066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.889791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7100 00:28:36.380 [2024-10-11 12:02:20.890484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.890500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.898251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fe2e8 00:28:36.380 [2024-10-11 12:02:20.898954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.898970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.906736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166eff18 00:28:36.380 [2024-10-11 12:02:20.907440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.907456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.915196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e8d30 00:28:36.380 [2024-10-11 12:02:20.915878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.915894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.923675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e7c50 00:28:36.380 [2024-10-11 12:02:20.924377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.924394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.932117] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e4578 00:28:36.380 [2024-10-11 12:02:20.932814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.932830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.940586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e6738 00:28:36.380 [2024-10-11 12:02:20.941244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.941260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.949045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fc128 00:28:36.380 [2024-10-11 12:02:20.949718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.949734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.957507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f6458 00:28:36.380 [2024-10-11 12:02:20.958207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.958223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.965980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166ec408 00:28:36.380 [2024-10-11 12:02:20.966676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.966692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.974457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f9b30 00:28:36.380 [2024-10-11 12:02:20.975151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.975169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.982920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166de8a8 00:28:36.380 [2024-10-11 12:02:20.983611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.983628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 
[2024-10-11 12:02:20.991394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e0630 00:28:36.380 [2024-10-11 12:02:20.992103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:20.992118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:20.999878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fd640 00:28:36.380 [2024-10-11 12:02:21.000570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:21.000587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.380 [2024-10-11 12:02:21.008327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f5378 00:28:36.380 [2024-10-11 12:02:21.008981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.380 [2024-10-11 12:02:21.008997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.642 [2024-10-11 12:02:21.016781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e4de8 00:28:36.642 [2024-10-11 12:02:21.017475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.642 [2024-10-11 12:02:21.017491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.642 [2024-10-11 12:02:21.025237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f20d8 00:28:36.642 [2024-10-11 12:02:21.025933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.642 [2024-10-11 12:02:21.025949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.642 [2024-10-11 12:02:21.033806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f2d80 00:28:36.642 [2024-10-11 12:02:21.034501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.642 [2024-10-11 12:02:21.034517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.643 [2024-10-11 12:02:21.042273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f7538 00:28:36.643 [2024-10-11 12:02:21.042933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.643 [2024-10-11 12:02:21.042949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 
p:0 m:0 dnr:0 00:28:36.643 [2024-10-11 12:02:21.050722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e9168 00:28:36.643 [2024-10-11 12:02:21.051406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.643 [2024-10-11 12:02:21.051422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.643 [2024-10-11 12:02:21.059181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e8088 00:28:36.643 [2024-10-11 12:02:21.059863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.643 [2024-10-11 12:02:21.059879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.643 [2024-10-11 12:02:21.067614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e6b70 00:28:36.643 [2024-10-11 12:02:21.068320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.643 [2024-10-11 12:02:21.068336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.643 [2024-10-11 12:02:21.076089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166e6300 00:28:36.643 [2024-10-11 12:02:21.076794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.643 [2024-10-11 12:02:21.076810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.643 [2024-10-11 12:02:21.084547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fb480 00:28:36.643 [2024-10-11 12:02:21.085242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.643 [2024-10-11 12:02:21.085258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.643 [2024-10-11 12:02:21.093005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166fc560 00:28:36.643 [2024-10-11 12:02:21.093694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.643 [2024-10-11 12:02:21.093710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:36.643 [2024-10-11 12:02:21.101464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f31b8 00:28:36.643 [2024-10-11 12:02:21.102165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:36.643 [2024-10-11 12:02:21.102180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:28:36.643 30029.00 IOPS, 117.30 MiB/s [2024-10-11T10:02:21.275Z]
[2024-10-11 12:02:21.109893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6d890) with pdu=0x2000166f81e0
00:28:36.643 [2024-10-11 12:02:21.110507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:36.643 [2024-10-11 12:02:21.110522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:28:36.643
00:28:36.643 Latency(us)
00:28:36.643 [2024-10-11T10:02:21.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:36.643 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:36.643 nvme0n1 : 2.01 30044.34 117.36 0.00 0.00 4255.05 2280.11 13871.79
00:28:36.643 [2024-10-11T10:02:21.275Z] ===================================================================================================================
00:28:36.643 [2024-10-11T10:02:21.275Z] Total : 30044.34 117.36 0.00 0.00 4255.05 2280.11 13871.79
00:28:36.643 {
00:28:36.643   "results": [
00:28:36.643     {
00:28:36.643       "job": "nvme0n1",
00:28:36.643       "core_mask": "0x2",
00:28:36.643       "workload": "randwrite",
00:28:36.643       "status": "finished",
00:28:36.643       "queue_depth": 128,
00:28:36.643       "io_size": 4096,
00:28:36.643       "runtime": 2.005336,
00:28:36.643       "iops": 30044.341696354128,
00:28:36.643       "mibps": 117.36070975138331,
00:28:36.643       "io_failed": 0,
00:28:36.643       "io_timeout": 0,
00:28:36.643       "avg_latency_us": 4255.046750651463,
00:28:36.643       "min_latency_us": 2280.1066666666666,
00:28:36.643       "max_latency_us": 13871.786666666667
00:28:36.643     }
00:28:36.643   ],
00:28:36.643   "core_count": 1
00:28:36.643 }
00:28:36.643 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:36.643 | .driver_specific
00:28:36.643 | .nvme_error
00:28:36.643 | .status_code
00:28:36.643 | .command_transient_transport_error'
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
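The trace lines above are the entire pass/fail check for this run: host/digest.sh reads the bdev's I/O statistics over bperf's RPC socket and extracts the transient-transport-error counter that the earlier bdev_nvme_set_options --nvme-error-stat call told the driver to maintain. Collapsed into a standalone pipeline, the check looks roughly like this (same commands and jq filter as in the trace; only the errcount variable name is invented here):

    # Each corrupted data digest surfaces as an NVMe completion with status
    # TRANSIENT TRANSPORT ERROR (sct 00 / sc 22); --nvme-error-stat makes the
    # bdev_nvme driver keep a per-status-code count of them.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))  # 236 errors were counted in this run, so the check passes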
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1189387
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1189387 ']'
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1189387
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1189387
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1189387'
killing process with pid 1189387
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1189387
00:28:36.904 Received shutdown signal, test time was about 2.000000 seconds
00:28:36.904
00:28:36.904 Latency(us)
00:28:36.904 [2024-10-11T10:02:21.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:36.904 [2024-10-11T10:02:21.536Z] ===================================================================================================================
00:28:36.904 [2024-10-11T10:02:21.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1189387
00:28:36.904 12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1190219
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1190219 /var/tmp/bperf.sock
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1190219 ']'
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
12:02:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:37.165 [2024-10-11 12:02:21.535131] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:28:37.165 [2024-10-11 12:02:21.535185] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190219 ]
I/O size of 131072 is greater than zero copy threshold (65536).
00:28:36.905 Zero copy mechanism will not be used.
00:28:36.905 [2024-10-11 12:02:21.613093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:37.165 [2024-10-11 12:02:21.641840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
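With the 4096-byte run finished and the old bperf process killed, run_bperf_err brings up a fresh bdevperf instance for the 131072-byte error pass. Condensed from the trace, the launch step amounts to the sketch below (the $rootdir variable and the explicit backgrounding are assumptions made for readability; the flags and the waitforlisten call are the ones shown in the trace):

    # -z starts bdevperf idle: it sits on /var/tmp/bperf.sock and waits for a
    # perform_tests RPC instead of running the workload immediately.
    # -w randwrite -o 131072 -q 16 -t 2: 128 KiB random writes, QD 16, 2 s run.
    "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!                                    # 1190219 in this run
    waitforlisten "$bperfpid" /var/tmp/bperf.sock  # poll until the RPC socket answers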
00:28:37.737 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:37.737 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:37.737 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:37.737 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:37.998 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:37.998 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:37.998 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:37.998 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:37.998 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:37.998 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:38.259 nvme0n1
00:28:38.259 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:38.259 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:38.259 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:38.259 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:38.259 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:38.259 12:02:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:38.521 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:38.521 Zero copy mechanism will not be used.
00:28:38.521 Running I/O for 2 seconds...
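The RPC sequence traced above is what arms the digest-error path before any I/O is issued: error counting on, stale injection cleared, controller attached with data digest enabled, crc32c corruption injected, then the workload triggered. As a linear sketch (the two rpc.py invocations are the ones shown in the trace; rpc_cmd is the harness wrapper whose target socket the trace does not show, so it is left unexpanded):

    # Keep per-status-code NVMe error counters and disable bdev-layer retries,
    # so every digest failure stays visible to get_transient_errcount later.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable        # clear any stale injection
    # Attach the TCP controller with DDGST (data digest) enabled on the connection.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt every 32nd crc32c
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests  # start the 2 s run

Every 32nd crc32c computed by the accel layer now comes out wrong, so a predictable fraction of the 128 KiB writes carry a bad DDGST, which is exactly the tcp.c:2233 data_crc32_calc_done error stream that follows.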
00:28:38.521 [2024-10-11 12:02:22.950110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:22.950316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:22.950345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:22.959151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:22.959579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:22.959602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:22.970050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:22.970232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:22.970249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:22.979513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:22.979811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:22.979830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:22.987656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:22.987986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:22.988004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:22.995305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:22.995496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:22.995513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.002779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:23.002973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:23.002991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.008647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:23.008843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:23.008861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.019105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:23.019414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:23.019439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.029801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:23.030100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:23.030121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.041247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:23.041451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:23.041467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.051630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:23.051946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:23.051963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.060573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:23.060980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:23.060999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.067551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.521 [2024-10-11 12:02:23.067746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.521 [2024-10-11 12:02:23.067764] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.521 [2024-10-11 12:02:23.075494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.075921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.075939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.084750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.085078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.085095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.092624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.092833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.092849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.097866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.098151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.098167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.105590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.106008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.106026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.111194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.111407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.111424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.117441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.117640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.117657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.123154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.123406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.123422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.131910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.132149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.132166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.139875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.140065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.140082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.522 [2024-10-11 12:02:23.147166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.522 [2024-10-11 12:02:23.147479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.522 [2024-10-11 12:02:23.147495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.154357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.154549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.154569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.162933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.163363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.163381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.170054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.170370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.784 [2024-10-11 12:02:23.170387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.175628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.175825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.175842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.181244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.181565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.181583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.186825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.186926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.186941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.195869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.196160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.196178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.205647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.205984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.206002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.214665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.214974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.214992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.223696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.224004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.224021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.231974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.232281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.232299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.240299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.240616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.240633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.249748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.250074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.250092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.258990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.259324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.259342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.264856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.265047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.265063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.269348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.269536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.269553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.277478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.277676] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.277693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.286991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.287182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.287198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.290980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.291168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.291184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.295640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.295844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.295861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.300665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.301092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.301110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.307293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.307587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.307604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.784 [2024-10-11 12:02:23.314755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.784 [2024-10-11 12:02:23.314947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.784 [2024-10-11 12:02:23.314965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.323330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.323663] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.323686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.330132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.330502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.330519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.337913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.338206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.338223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.345139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.345326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.345349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.354656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.354918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.354936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.365092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.365315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.365332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.375800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.376050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.376067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.386154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 
00:28:38.785 [2024-10-11 12:02:23.386458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.386476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.396583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.396785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.396802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.785 [2024-10-11 12:02:23.406497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:38.785 [2024-10-11 12:02:23.406799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.785 [2024-10-11 12:02:23.406817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.047 [2024-10-11 12:02:23.417340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.047 [2024-10-11 12:02:23.417533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.047 [2024-10-11 12:02:23.417550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.047 [2024-10-11 12:02:23.427686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.047 [2024-10-11 12:02:23.427922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.047 [2024-10-11 12:02:23.427939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.047 [2024-10-11 12:02:23.438077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.047 [2024-10-11 12:02:23.438378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.047 [2024-10-11 12:02:23.438396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.047 [2024-10-11 12:02:23.448666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.047 [2024-10-11 12:02:23.448966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.047 [2024-10-11 12:02:23.448983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.047 [2024-10-11 12:02:23.458559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.047 [2024-10-11 12:02:23.458806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.458821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.469745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.470019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.470036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.480682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.480978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.480994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.490736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.491013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.491030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.501284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.501362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.501377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.511420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.511689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.511705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.520575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.520883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.520899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.530363] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.530470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.530486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.540509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.540608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.540624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.548633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.548698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.548713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.557557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.557654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.557675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.566224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.566277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.566292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.572799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.572853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.572868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.579746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.580016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.580032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:39.048 [2024-10-11 12:02:23.587284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.587365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.587380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.591940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.592015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.592033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.597781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.597861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.597876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.604132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.604206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.604222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.608802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.608948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.608966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.614514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.614572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.614588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.619001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.619056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.619071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.622956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.623017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.623033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.626258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.626315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.626330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.629486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.629533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.629548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.633129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.633177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.633192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.636336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.636380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.636395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.639358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.639400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.639416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.643141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.643184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.643199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.648495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.048 [2024-10-11 12:02:23.648752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.048 [2024-10-11 12:02:23.648770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.048 [2024-10-11 12:02:23.655885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.049 [2024-10-11 12:02:23.656152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.049 [2024-10-11 12:02:23.656167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.049 [2024-10-11 12:02:23.662062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.049 [2024-10-11 12:02:23.662152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.049 [2024-10-11 12:02:23.662168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.049 [2024-10-11 12:02:23.669146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.049 [2024-10-11 12:02:23.669206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.049 [2024-10-11 12:02:23.669221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.049 [2024-10-11 12:02:23.677458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.049 [2024-10-11 12:02:23.677522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.049 [2024-10-11 12:02:23.677539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.310 [2024-10-11 12:02:23.684918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.310 [2024-10-11 12:02:23.684968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.310 [2024-10-11 12:02:23.684983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.310 [2024-10-11 12:02:23.691851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.310 [2024-10-11 12:02:23.691916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.691931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.697279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.697324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.697341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.703581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.703637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.703652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.710326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.710400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.710415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.715881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.716127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.716142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.722177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.722234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.722250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.729989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.730052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.730070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.736363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.736582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 
[2024-10-11 12:02:23.736597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.744653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.744927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.744944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.752735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.752814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.752829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.761267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.761326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.761340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.767083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.767143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.767158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.772844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.772905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.772920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.780644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.780696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.780711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.788447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.788496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.788511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.794137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.794200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.794215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.802386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.802494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.802510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.810406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.810634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.810649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.818103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.818171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.818186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.826035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.826098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.826113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.830308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.830378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.830392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.838644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.838865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.838881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.847914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.848147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.848162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.854903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.855128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.855143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.862469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.862521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.862538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.867538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.867795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.867810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.874423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.874652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.874672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.883371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.311 [2024-10-11 12:02:23.883503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.311 [2024-10-11 12:02:23.883518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.311 [2024-10-11 12:02:23.892031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.312 [2024-10-11 12:02:23.892083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-10-11 12:02:23.892102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.312 [2024-10-11 12:02:23.898492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.312 [2024-10-11 12:02:23.898562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-10-11 12:02:23.898577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.312 [2024-10-11 12:02:23.907364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.312 [2024-10-11 12:02:23.907606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-10-11 12:02:23.907630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.312 [2024-10-11 12:02:23.915194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.312 [2024-10-11 12:02:23.915248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-10-11 12:02:23.915263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.312 [2024-10-11 12:02:23.921937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.312 [2024-10-11 12:02:23.922003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-10-11 12:02:23.922019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.312 [2024-10-11 12:02:23.929257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.312 [2024-10-11 12:02:23.929324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-10-11 12:02:23.929339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.312 [2024-10-11 12:02:23.937246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.312 [2024-10-11 12:02:23.937427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.312 [2024-10-11 12:02:23.937443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:23.946387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 
[2024-10-11 12:02:23.947045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:23.947062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.573 4089.00 IOPS, 511.12 MiB/s [2024-10-11T10:02:24.205Z] [2024-10-11 12:02:23.956906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:23.957138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:23.957154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:23.968190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:23.968423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:23.968440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:23.979835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:23.980105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:23.980120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:23.991115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:23.991390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:23.991405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.002234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.002455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.002471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.013216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.013515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.013530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.024945] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.025217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.025232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.035277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.035353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.035369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.043490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.043565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.043581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.052384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.052682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.052702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.061039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.061091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.061106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.070111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.070165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.070181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.079292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.079352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.079367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
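[Editor's note] Each repeated triplet above is one injected failure: the host-side TCP transport (tcp.c, data_crc32_calc_done) detects a CRC32C data digest mismatch on a received PDU, and the pending WRITE (sqid:1 cid:15, len:32 blocks) is completed with COMMAND TRANSIENT TRANSPORT ERROR, a retryable status rather than a fatal one. The interleaved rate line is consistent with the command size: 511.12 MiB/s divided by 4089.00 IOPS is 128 KiB per I/O, which matches len:32 at a 4 KiB block size (an inference from the log, not stated in it). As a minimal sketch of the checksum being validated here (not SPDK's own implementation, which uses table-driven and hardware-accelerated variants), a bitwise reflected CRC32C looks like:

#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the
 * checksum the NVMe/TCP data digest carries. Seeded with 0xFFFFFFFF
 * and complemented at the end, per the usual CRC32C convention. */
static uint32_t
crc32c_sw(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++) {
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
        }
    }
    return ~crc;
}

The receiver recomputes this over the PDU's data section and compares it with the digest field appended to the PDU; any mismatch is what surfaces above as "Data digest error on tqpair".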
00:28:39.573 [2024-10-11 12:02:24.088448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.088718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.088737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.098877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.099128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.099147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.110225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.110464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.110480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.121335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.121550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.121567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.132447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.132703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.132719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.143782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.144031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.573 [2024-10-11 12:02:24.144047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.573 [2024-10-11 12:02:24.154689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.573 [2024-10-11 12:02:24.154961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.574 [2024-10-11 12:02:24.154978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.574 [2024-10-11 12:02:24.166665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.574 [2024-10-11 12:02:24.166931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.574 [2024-10-11 12:02:24.166946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.574 [2024-10-11 12:02:24.178371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.574 [2024-10-11 12:02:24.178463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.574 [2024-10-11 12:02:24.178478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.574 [2024-10-11 12:02:24.189775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.574 [2024-10-11 12:02:24.189961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.574 [2024-10-11 12:02:24.189976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.574 [2024-10-11 12:02:24.200679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.574 [2024-10-11 12:02:24.200979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.574 [2024-10-11 12:02:24.200995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.211313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.211598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.211614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.222720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.222971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.222985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.230710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.230768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.230783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.235773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.235841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.235856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.242916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.242971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.242986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.248612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.248688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.248703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.255962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.256270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.256286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.265053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.265111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.265129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.272618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.272713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.272728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.281680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.281918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.281933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.289291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.289553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.289568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.297995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.298039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.298055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.307033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.307090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.307108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.312302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.312357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.312372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.319664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.319929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.319945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.836 [2024-10-11 12:02:24.328410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.836 [2024-10-11 12:02:24.328466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.836 [2024-10-11 12:02:24.328482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.336230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.336299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 
[2024-10-11 12:02:24.336314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.345127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.345177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.345192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.352682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.352745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.352760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.361376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.361436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.361451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.367829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.367879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.367894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.375000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.375180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.375196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.382904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.382960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.382976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.389048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.389122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.389137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.393205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.393265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.393283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.400251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.400309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.400324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.408898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.408951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.408966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.414686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.414811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.414826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.423517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.423694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.423709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.434366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.434618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.434633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.445789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.446036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.446052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.837 [2024-10-11 12:02:24.457219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:39.837 [2024-10-11 12:02:24.457549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.837 [2024-10-11 12:02:24.457564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.467380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.467471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.467487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.477287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.477551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.477569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.487992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.488251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.488267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.498876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.499137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.499152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.509441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.509657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.509678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.520044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.520314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.520329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.531538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.531648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.531663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.542248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.542516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.542531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.554142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.554410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.554426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.565367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.565509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.565525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.576697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.576995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.577010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.587834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.588057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.588109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.599142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 
[2024-10-11 12:02:24.599354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.599370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.099 [2024-10-11 12:02:24.607875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.099 [2024-10-11 12:02:24.608068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.099 [2024-10-11 12:02:24.608083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.612421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.612474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.612491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.616165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.616223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.616238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.622641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.622852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.622866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.628170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.628236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.628251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.632158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.632204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.632219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.636023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.636077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.636092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.642802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.643062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.643077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.648996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.649059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.649074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.653499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.653553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.653572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.660792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.660848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.660863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.664585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.664635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.664650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.668411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.668455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.668470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.672286] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.672351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.672367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.676065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.676119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.676138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.680140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.680187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.680202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.683445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.683490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.683505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.687301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.687357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.687372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.690935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.690979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.690994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.694401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.694446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.694461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
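[Editor's note] The spdk_nvme_print_completion lines print the raw completion-queue-entry fields: cdw0, the submission queue head pointer (sqhd, which steps by 0x20 between the completions shown), the phase tag (p), and the More (m) and Do Not Retry (dnr) bits; "(00/22)" is status code type 00 (generic command status) and status code 22 (the transient transport error named in the text). With dnr:0 the controller explicitly permits a retry. A minimal sketch of unpacking that 16-bit status word follows (a hypothetical helper written for illustration, not an SPDK function); the bit positions are the NVMe base spec's CQE Dword 3 layout shifted down by 16:

#include <stdint.h>
#include <stdio.h>

/* Decode an NVMe CQE status word (CQE DW3 bits 31:16): bit 0 carries
 * the phase tag, bits 8:1 the status code (SC), bits 11:9 the status
 * code type (SCT), bit 14 More, bit 15 Do Not Retry. */
static void
print_cqe_status(uint16_t sw)
{
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
           (sw >> 9) & 0x7,   /* SCT: 00 = generic command status   */
           (sw >> 1) & 0xFF,  /* SC:  22 = transient transport error */
           sw & 0x1,          /* phase tag                           */
           (sw >> 14) & 0x1,  /* More                                */
           (sw >> 15) & 0x1); /* Do Not Retry                        */
}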
00:28:40.100 [2024-10-11 12:02:24.698022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.698075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.698090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.701373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.701440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.701455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.705389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.705432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.705450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.709426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.709483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.709498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.712936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.712987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.713001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.717013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.717087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.717102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.721762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.721820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.721835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.100 [2024-10-11 12:02:24.728391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.100 [2024-10-11 12:02:24.728443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.100 [2024-10-11 12:02:24.728458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.737460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.737514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.737529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.745775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.745988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.746003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.753905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.753949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.753964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.759126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.759230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.759248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.764416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.764463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.764478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.768140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.768201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.768216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.772001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.772061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.772076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.776367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.776415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.776430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.780502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.780573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.780588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.787295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.787343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.362 [2024-10-11 12:02:24.787358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.362 [2024-10-11 12:02:24.791217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.362 [2024-10-11 12:02:24.791268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.791283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.795265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.795323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.795338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.799356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.799417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.799432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.803114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.803163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.803178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.807079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.807138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.807155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.810940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.810987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.811003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.815001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.815061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.815077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.822534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.822795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.822817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.827228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.827295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.827310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.831315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.831380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 
[2024-10-11 12:02:24.831396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.835331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.835407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.835422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.843816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.843867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.843882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.850121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.850165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.850180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.854336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.854386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.854402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.858464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.858509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.858525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.863062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.863122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.863139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.868388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.868453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.868467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.873445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.873509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.873524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.880116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.880354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.880368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.886850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.886963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.886981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.893531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.893802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.893818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.900789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.900867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.900883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.906185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.906274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.906289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.911431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.911512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.911527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.915543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.915621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.915636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.919324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.919442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.919457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.363 [2024-10-11 12:02:24.923660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.363 [2024-10-11 12:02:24.923754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.363 [2024-10-11 12:02:24.923769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.364 [2024-10-11 12:02:24.931297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.364 [2024-10-11 12:02:24.931347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.364 [2024-10-11 12:02:24.931363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.364 [2024-10-11 12:02:24.937589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.364 [2024-10-11 12:02:24.937851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.364 [2024-10-11 12:02:24.937868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.364 [2024-10-11 12:02:24.946479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c6dbd0) with pdu=0x2000166fef90 00:28:40.364 [2024-10-11 12:02:24.946764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.364 [2024-10-11 12:02:24.946781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.364 4172.00 IOPS, 521.50 MiB/s 00:28:40.364 Latency(us) 00:28:40.364 [2024-10-11T10:02:24.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.364 Job: nvme0n1 (Core 
Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:40.364 nvme0n1 : 2.01 4169.12 521.14 0.00 0.00 3830.77 1454.08 12288.00 00:28:40.364 [2024-10-11T10:02:24.996Z] =================================================================================================================== 00:28:40.364 [2024-10-11T10:02:24.996Z] Total : 4169.12 521.14 0.00 0.00 3830.77 1454.08 12288.00 00:28:40.364 { 00:28:40.364 "results": [ 00:28:40.364 { 00:28:40.364 "job": "nvme0n1", 00:28:40.364 "core_mask": "0x2", 00:28:40.364 "workload": "randwrite", 00:28:40.364 "status": "finished", 00:28:40.364 "queue_depth": 16, 00:28:40.364 "io_size": 131072, 00:28:40.364 "runtime": 2.005219, 00:28:40.364 "iops": 4169.120679586618, 00:28:40.364 "mibps": 521.1400849483273, 00:28:40.364 "io_failed": 0, 00:28:40.364 "io_timeout": 0, 00:28:40.364 "avg_latency_us": 3830.7709346092506, 00:28:40.364 "min_latency_us": 1454.08, 00:28:40.364 "max_latency_us": 12288.0 00:28:40.364 } 00:28:40.364 ], 00:28:40.364 "core_count": 1 00:28:40.364 } 00:28:40.364 12:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:40.364 12:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:40.364 12:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:40.364 | .driver_specific 00:28:40.364 | .nvme_error 00:28:40.364 | .status_code 00:28:40.364 | .command_transient_transport_error' 00:28:40.364 12:02:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 269 > 0 )) 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1190219 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1190219 ']' 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1190219 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1190219 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1190219' 00:28:40.625 killing process with pid 1190219 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1190219 00:28:40.625 Received shutdown signal, test time was about 2.000000 seconds 00:28:40.625 00:28:40.625 Latency(us) 00:28:40.625 [2024-10-11T10:02:25.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.625 [2024-10-11T10:02:25.257Z] 
=================================================================================================================== 00:28:40.625 [2024-10-11T10:02:25.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:40.625 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1190219 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1187997 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1187997 ']' 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1187997 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1187997 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1187997' 00:28:40.886 killing process with pid 1187997 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1187997 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1187997 00:28:40.886 00:28:40.886 real 0m15.409s 00:28:40.886 user 0m30.393s 00:28:40.886 sys 0m3.454s 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:40.886 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:40.886 ************************************ 00:28:40.886 END TEST nvmf_digest_error 00:28:40.886 ************************************ 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.147 rmmod nvme_tcp 00:28:41.147 rmmod nvme_fabrics 00:28:41.147 rmmod nvme_keyring 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1187997 ']' 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@516 -- # killprocess 1187997 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1187997 ']' 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1187997 00:28:41.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1187997) - No such process 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1187997 is not found' 00:28:41.147 Process with pid 1187997 is not found 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.147 12:02:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.147 00:28:43.147 real 0m41.490s 00:28:43.147 user 1m4.228s 00:28:43.147 sys 0m12.992s 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:43.147 ************************************ 00:28:43.147 END TEST nvmf_digest 00:28:43.147 ************************************ 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.147 ************************************ 00:28:43.147 START TEST nvmf_bdevperf 00:28:43.147 ************************************ 00:28:43.147 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:43.409 * Looking for test storage... 
00:28:43.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:43.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.409 --rc genhtml_branch_coverage=1 00:28:43.409 --rc genhtml_function_coverage=1 00:28:43.409 --rc genhtml_legend=1 00:28:43.409 --rc geninfo_all_blocks=1 00:28:43.409 --rc geninfo_unexecuted_blocks=1 00:28:43.409 00:28:43.409 ' 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:43.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.409 --rc genhtml_branch_coverage=1 00:28:43.409 --rc genhtml_function_coverage=1 00:28:43.409 --rc genhtml_legend=1 00:28:43.409 --rc geninfo_all_blocks=1 00:28:43.409 --rc geninfo_unexecuted_blocks=1 00:28:43.409 00:28:43.409 ' 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:43.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.409 --rc genhtml_branch_coverage=1 00:28:43.409 --rc genhtml_function_coverage=1 00:28:43.409 --rc genhtml_legend=1 00:28:43.409 --rc geninfo_all_blocks=1 00:28:43.409 --rc geninfo_unexecuted_blocks=1 00:28:43.409 00:28:43.409 ' 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:43.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.409 --rc genhtml_branch_coverage=1 00:28:43.409 --rc genhtml_function_coverage=1 00:28:43.409 --rc genhtml_legend=1 00:28:43.409 --rc geninfo_all_blocks=1 00:28:43.409 --rc geninfo_unexecuted_blocks=1 00:28:43.409 00:28:43.409 ' 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.409 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:43.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.410 12:02:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:51.555 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:51.555 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:51.555 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:51.555 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:28:51.555 00:28:51.555 --- 10.0.0.2 ping statistics --- 00:28:51.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.555 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:28:51.555 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:28:51.556 00:28:51.556 --- 10.0.0.1 ping statistics --- 00:28:51.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.556 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1195091 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1195091 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1195091 ']' 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:51.556 12:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.556 [2024-10-11 12:02:35.593878] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:28:51.556 [2024-10-11 12:02:35.593946] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.556 [2024-10-11 12:02:35.685932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:51.556 [2024-10-11 12:02:35.739255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.556 [2024-10-11 12:02:35.739309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.556 [2024-10-11 12:02:35.739318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.556 [2024-10-11 12:02:35.739325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.556 [2024-10-11 12:02:35.739331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.556 [2024-10-11 12:02:35.741195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.556 [2024-10-11 12:02:35.741349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.556 [2024-10-11 12:02:35.741350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:51.817 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:51.817 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:51.817 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:51.817 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:51.817 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.078 [2024-10-11 12:02:36.464444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.078 Malloc0 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
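The tgt_init sequence being traced here drives SPDK's JSON-RPC interface through rpc_cmd; the same five-step NVMe-oF/TCP target bring-up can be issued directly with scripts/rpc.py. A minimal sketch follows, with every command and value copied from the traced rpc_cmd calls (the namespace and listener steps appear where the trace resumes below); the one assumption is that rpc.py talks to its default /var/tmp/spdk.sock socket, as no -s option is traced for these calls.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Create the TCP transport with the options used by the test ('-t tcp -o -u 8192').
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # Back the namespace with a 64 MiB, 512 B-block malloc bdev
    # (MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 from bdevperf.sh).
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # Create the subsystem: -a allows any host, -s sets the serial number.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Attach the bdev as a namespace and expose the subsystem on 10.0.0.2:4420.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420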
00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:52.078 [2024-10-11 12:02:36.535144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:52.078 { 00:28:52.078 "params": { 00:28:52.078 "name": "Nvme$subsystem", 00:28:52.078 "trtype": "$TEST_TRANSPORT", 00:28:52.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.078 "adrfam": "ipv4", 00:28:52.078 "trsvcid": "$NVMF_PORT", 00:28:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.078 "hdgst": ${hdgst:-false}, 00:28:52.078 "ddgst": ${ddgst:-false} 00:28:52.078 }, 00:28:52.078 "method": "bdev_nvme_attach_controller" 00:28:52.078 } 00:28:52.078 EOF 00:28:52.078 )") 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:52.078 12:02:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:52.078 "params": { 00:28:52.078 "name": "Nvme1", 00:28:52.078 "trtype": "tcp", 00:28:52.078 "traddr": "10.0.0.2", 00:28:52.078 "adrfam": "ipv4", 00:28:52.078 "trsvcid": "4420", 00:28:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:52.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:52.078 "hdgst": false, 00:28:52.078 "ddgst": false 00:28:52.078 }, 00:28:52.078 "method": "bdev_nvme_attach_controller" 00:28:52.078 }' 00:28:52.078 [2024-10-11 12:02:36.593820] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
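The five rpc_cmd calls above fully assemble the target: a TCP transport, a 64 MiB RAM-backed Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a listener on 10.0.0.2:4420. rpc_cmd only forwards its arguments to the target's RPC socket, so the same bring-up can be reproduced by hand with scripts/rpc.py — a minimal sketch, assuming the default /var/tmp/spdk.sock socket (inside this netns-based harness the socket and addresses are set up by the scripts, so treat the values as illustrative):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then consumes the generated JSON (a single bdev_nvme_attach_controller entry) with -q 128 -o 4096 -w verify -t 1: queue depth 128, 4 KiB I/Os, a one-second verify pass. Its MiB/s column is simply IOPS x 4096 B / 2^20, so the 8674.47 IOPS reported below works out to the listed 33.88 MiB/s.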
00:28:52.078 [2024-10-11 12:02:36.593891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195432 ] 00:28:52.078 [2024-10-11 12:02:36.675660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.340 [2024-10-11 12:02:36.729662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.601 Running I/O for 1 seconds... 00:28:53.545 8612.00 IOPS, 33.64 MiB/s 00:28:53.545 Latency(us) 00:28:53.545 [2024-10-11T10:02:38.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.545 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:53.545 Verification LBA range: start 0x0 length 0x4000 00:28:53.545 Nvme1n1 : 1.01 8674.47 33.88 0.00 0.00 14666.00 1713.49 14417.92 00:28:53.545 [2024-10-11T10:02:38.177Z] =================================================================================================================== 00:28:53.545 [2024-10-11T10:02:38.177Z] Total : 8674.47 33.88 0.00 0.00 14666.00 1713.49 14417.92 00:28:53.545 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1195773 00:28:53.545 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:53.805 { 00:28:53.805 "params": { 00:28:53.805 "name": "Nvme$subsystem", 00:28:53.805 "trtype": "$TEST_TRANSPORT", 00:28:53.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.805 "adrfam": "ipv4", 00:28:53.805 "trsvcid": "$NVMF_PORT", 00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.805 "hdgst": ${hdgst:-false}, 00:28:53.805 "ddgst": ${ddgst:-false} 00:28:53.805 }, 00:28:53.805 "method": "bdev_nvme_attach_controller" 00:28:53.805 } 00:28:53.805 EOF 00:28:53.805 )") 00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=,
00:28:53.805 12:02:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:28:53.805 "params": {
00:28:53.805 "name": "Nvme1",
00:28:53.805 "trtype": "tcp",
00:28:53.805 "traddr": "10.0.0.2",
00:28:53.805 "adrfam": "ipv4",
00:28:53.805 "trsvcid": "4420",
00:28:53.805 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:53.805 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:53.805 "hdgst": false,
00:28:53.805 "ddgst": false
00:28:53.805 },
00:28:53.805 "method": "bdev_nvme_attach_controller"
00:28:53.805 }'
00:28:53.805 [2024-10-11 12:02:38.226046] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:28:53.805 [2024-10-11 12:02:38.226128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195773 ]
00:28:53.805 [2024-10-11 12:02:38.306292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:53.806 [2024-10-11 12:02:38.344660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:54.066 Running I/O for 15 seconds...
00:28:55.946 10424.00 IOPS, 40.72 MiB/s [2024-10-11T10:02:41.524Z]
10971.00 IOPS, 42.86 MiB/s [2024-10-11T10:02:41.524Z]
12:02:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1195091
12:02:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:56.892 [2024-10-11 12:02:41.189214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:56.892 [2024-10-11 12:02:41.189256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every command still queued on qpair 1 — READs lba 104776-105512 and WRITEs lba 105528-105784 — each completed as ABORTED - SQ DELETION (00/08); condensed ...]
00:28:56.895 [2024-10-11 12:02:41.191540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22720 is same with the state(6) to be set
00:28:56.895 [2024-10-11 12:02:41.191549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:56.895 [2024-10-11 12:02:41.191555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:56.895 [2024-10-11 12:02:41.191562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105520 len:8 PRP1 0x0 PRP2 0x0
00:28:56.895 [2024-10-11 12:02:41.191570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:56.895 [2024-10-11 12:02:41.191608] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb22720 was disconnected and freed. reset controller.
00:28:56.895 [2024-10-11 12:02:41.195174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.895 [2024-10-11 12:02:41.195223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.895 [2024-10-11 12:02:41.196026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.895 [2024-10-11 12:02:41.196064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.895 [2024-10-11 12:02:41.196075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.895 [2024-10-11 12:02:41.196318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.895 [2024-10-11 12:02:41.196542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.895 [2024-10-11 12:02:41.196550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.895 [2024-10-11 12:02:41.196559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.895 [2024-10-11 12:02:41.200134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
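Everything condensed in the dump above is a single event: when the SIGKILLed target's socket went away, bdev_nvme tore down qpair 0xb22720 and manually completed every command still queued, each with ABORTED - SQ DELETION, before scheduling the first controller reset. When triaging a full log, the dump is easier to size than to read — a quick sketch (the log filename is illustrative):

grep -c 'ABORTED - SQ DELETION' nvmf_bdevperf.log      # count aborted completions
grep -oE 'lba:[0-9]+' nvmf_bdevperf.log | sort -u      # distinct LBAs that were in flight

The reset itself fails immediately: errno = 111 is ECONNREFUSED, which is expected here — the kill -9 removed the only listener on 10.0.0.2:4420, so every reconnect is refused at the TCP level and the controller is marked failed.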
00:28:56.895 [2024-10-11 12:02:41.209358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.895 [2024-10-11 12:02:41.210047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.895 [2024-10-11 12:02:41.210086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.895 [2024-10-11 12:02:41.210098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.895 [2024-10-11 12:02:41.210338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.895 [2024-10-11 12:02:41.210562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.895 [2024-10-11 12:02:41.210571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.895 [2024-10-11 12:02:41.210579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.895 [2024-10-11 12:02:41.214140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.895 [2024-10-11 12:02:41.223343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.895 [2024-10-11 12:02:41.223920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.895 [2024-10-11 12:02:41.223963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.895 [2024-10-11 12:02:41.223975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.895 [2024-10-11 12:02:41.224215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.895 [2024-10-11 12:02:41.224438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.895 [2024-10-11 12:02:41.224448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.895 [2024-10-11 12:02:41.224456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.895 [2024-10-11 12:02:41.228017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.895 [2024-10-11 12:02:41.237231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.895 [2024-10-11 12:02:41.237905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.895 [2024-10-11 12:02:41.237944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.895 [2024-10-11 12:02:41.237955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.895 [2024-10-11 12:02:41.238195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.895 [2024-10-11 12:02:41.238419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.895 [2024-10-11 12:02:41.238428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.895 [2024-10-11 12:02:41.238435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.241995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.251203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.251901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.896 [2024-10-11 12:02:41.251941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.896 [2024-10-11 12:02:41.251952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.896 [2024-10-11 12:02:41.252193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.896 [2024-10-11 12:02:41.252416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.896 [2024-10-11 12:02:41.252426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.896 [2024-10-11 12:02:41.252434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.255997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.265005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.265589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.896 [2024-10-11 12:02:41.265609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.896 [2024-10-11 12:02:41.265617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.896 [2024-10-11 12:02:41.265843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.896 [2024-10-11 12:02:41.266068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.896 [2024-10-11 12:02:41.266077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.896 [2024-10-11 12:02:41.266084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.269630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.278835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.279400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.896 [2024-10-11 12:02:41.279417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.896 [2024-10-11 12:02:41.279425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.896 [2024-10-11 12:02:41.279644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.896 [2024-10-11 12:02:41.279868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.896 [2024-10-11 12:02:41.279878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.896 [2024-10-11 12:02:41.279885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.283433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.292631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.293259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.896 [2024-10-11 12:02:41.293302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.896 [2024-10-11 12:02:41.293313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.896 [2024-10-11 12:02:41.293556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.896 [2024-10-11 12:02:41.293788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.896 [2024-10-11 12:02:41.293798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.896 [2024-10-11 12:02:41.293805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.297362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.306591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.307197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.896 [2024-10-11 12:02:41.307219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.896 [2024-10-11 12:02:41.307228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.896 [2024-10-11 12:02:41.307449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.896 [2024-10-11 12:02:41.307682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.896 [2024-10-11 12:02:41.307692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.896 [2024-10-11 12:02:41.307699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.311250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.320466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.321025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.896 [2024-10-11 12:02:41.321044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.896 [2024-10-11 12:02:41.321052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.896 [2024-10-11 12:02:41.321273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.896 [2024-10-11 12:02:41.321492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.896 [2024-10-11 12:02:41.321501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.896 [2024-10-11 12:02:41.321508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.325065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.334286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.334831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.896 [2024-10-11 12:02:41.334850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.896 [2024-10-11 12:02:41.334857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.896 [2024-10-11 12:02:41.335078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.896 [2024-10-11 12:02:41.335297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.896 [2024-10-11 12:02:41.335305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.896 [2024-10-11 12:02:41.335313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.338867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.348319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.348887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.896 [2024-10-11 12:02:41.348907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.896 [2024-10-11 12:02:41.348914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.896 [2024-10-11 12:02:41.349135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.896 [2024-10-11 12:02:41.349355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.896 [2024-10-11 12:02:41.349364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.896 [2024-10-11 12:02:41.349372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.896 [2024-10-11 12:02:41.352931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.896 [2024-10-11 12:02:41.362152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.896 [2024-10-11 12:02:41.362717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.362736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.362749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.362969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.363188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.363197] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.363204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.366758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.375969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.376517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.376536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.376544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.376771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.376991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.377000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.377007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.380555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.389769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.390358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.390377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.390385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.390605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.390834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.390844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.390851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.394405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.403634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.404217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.404275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.404288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.404541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.404780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.404798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.404806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.408389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.417636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.418337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.418401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.418414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.418682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.418910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.418920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.418928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.422507] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.431557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.432167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.432228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.432241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.432498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.432741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.432753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.432762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.436343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.445374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.445989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.446053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.446068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.446323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.446550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.446559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.446569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.450164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.459208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.460742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.460793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.460807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.461064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.461291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.461304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.461313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.464899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.473086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.473709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.473735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.473745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.473970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.474193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.474202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.474210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.477781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.487015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.897 [2024-10-11 12:02:41.487662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.897 [2024-10-11 12:02:41.487734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.897 [2024-10-11 12:02:41.487747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.897 [2024-10-11 12:02:41.488003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.897 [2024-10-11 12:02:41.488230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.897 [2024-10-11 12:02:41.488240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.897 [2024-10-11 12:02:41.488248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.897 [2024-10-11 12:02:41.491826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.897 [2024-10-11 12:02:41.500860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.898 [2024-10-11 12:02:41.501455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.898 [2024-10-11 12:02:41.501486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.898 [2024-10-11 12:02:41.501495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.898 [2024-10-11 12:02:41.501736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.898 [2024-10-11 12:02:41.501961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.898 [2024-10-11 12:02:41.501972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.898 [2024-10-11 12:02:41.501980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:56.898 [2024-10-11 12:02:41.505558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:56.898 9826.67 IOPS, 38.39 MiB/s [2024-10-11T10:02:41.530Z] [2024-10-11 12:02:41.514802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:56.898 [2024-10-11 12:02:41.515384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.898 [2024-10-11 12:02:41.515411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:56.898 [2024-10-11 12:02:41.515419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:56.898 [2024-10-11 12:02:41.515642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:56.898 [2024-10-11 12:02:41.515872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:56.898 [2024-10-11 12:02:41.515883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:56.898 [2024-10-11 12:02:41.515891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.519457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.528681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.529277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.529301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.529309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.529531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.529761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.529773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.529781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.533360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
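The interleaved bdevperf progress sample (9826.67 IOPS, 38.39 MiB/s) is self-consistent with a 4 KiB I/O size, which matches the aborted READ at the top of this section (len:8, i.e. eight logical blocks, assuming 512-byte blocks):

$$ 9826.67 \times 8 \times 512\,\text{B} \approx 40{,}250{,}040\,\text{B/s}, \qquad 40{,}250{,}040 / 2^{20} \approx 38.39\,\text{MiB/s}. $$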
00:28:57.160 [2024-10-11 12:02:41.542582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.543260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.543322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.543336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.543591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.543832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.543842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.543859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.547438] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.556516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.557141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.557168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.557177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.557401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.557623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.557631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.557639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.561219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.570451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.571149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.571211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.571224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.571480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.571721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.571731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.571740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.575320] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.584345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.584972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.585003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.585012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.585236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.585458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.585468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.585475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.589052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.598282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.598853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.598885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.598894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.599129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.599350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.599359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.599366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.602939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.612432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.613149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.613211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.613224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.613480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.613718] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.613728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.613737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.617313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.626331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.627068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.627131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.627144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.627400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.627627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.627638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.627646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.631248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.640270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.640779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.640809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.160 [2024-10-11 12:02:41.640818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.160 [2024-10-11 12:02:41.641042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.160 [2024-10-11 12:02:41.641271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.160 [2024-10-11 12:02:41.641280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.160 [2024-10-11 12:02:41.641287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.160 [2024-10-11 12:02:41.644857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.160 [2024-10-11 12:02:41.654074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.160 [2024-10-11 12:02:41.654758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.160 [2024-10-11 12:02:41.654822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.161 [2024-10-11 12:02:41.654836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.161 [2024-10-11 12:02:41.655092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.161 [2024-10-11 12:02:41.655320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.161 [2024-10-11 12:02:41.655330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.161 [2024-10-11 12:02:41.655338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.161 [2024-10-11 12:02:41.658934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.161 [2024-10-11 12:02:41.667967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.161 [2024-10-11 12:02:41.668604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.161 [2024-10-11 12:02:41.668632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.161 [2024-10-11 12:02:41.668641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.161 [2024-10-11 12:02:41.668874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.161 [2024-10-11 12:02:41.669097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.161 [2024-10-11 12:02:41.669107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.161 [2024-10-11 12:02:41.669114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.161 [2024-10-11 12:02:41.672677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.161 [2024-10-11 12:02:41.681890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.161 [2024-10-11 12:02:41.682585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.161 [2024-10-11 12:02:41.682647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.161 [2024-10-11 12:02:41.682660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.161 [2024-10-11 12:02:41.682927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.161 [2024-10-11 12:02:41.683156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.161 [2024-10-11 12:02:41.683165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.161 [2024-10-11 12:02:41.683174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.161 [2024-10-11 12:02:41.686758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.161 [2024-10-11 12:02:41.695774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.161 [2024-10-11 12:02:41.696476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.161 [2024-10-11 12:02:41.696539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.161 [2024-10-11 12:02:41.696553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.161 [2024-10-11 12:02:41.696823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.161 [2024-10-11 12:02:41.697053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.161 [2024-10-11 12:02:41.697063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.161 [2024-10-11 12:02:41.697072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.161 [2024-10-11 12:02:41.700663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.161 [2024-10-11 12:02:41.709750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.161 [2024-10-11 12:02:41.710435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.161 [2024-10-11 12:02:41.710497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.161 [2024-10-11 12:02:41.710510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.161 [2024-10-11 12:02:41.710778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.161 [2024-10-11 12:02:41.711005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.161 [2024-10-11 12:02:41.711015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.161 [2024-10-11 12:02:41.711023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.161 [2024-10-11 12:02:41.714593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.161 [2024-10-11 12:02:41.723612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.161 [2024-10-11 12:02:41.724341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.161 [2024-10-11 12:02:41.724403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.161 [2024-10-11 12:02:41.724416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.161 [2024-10-11 12:02:41.724687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.161 [2024-10-11 12:02:41.724914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.161 [2024-10-11 12:02:41.724923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.161 [2024-10-11 12:02:41.724932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.161 [2024-10-11 12:02:41.728502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.161 [2024-10-11 12:02:41.737531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.161 [2024-10-11 12:02:41.738248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.161 [2024-10-11 12:02:41.738309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.161 [2024-10-11 12:02:41.738329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.161 [2024-10-11 12:02:41.738586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.161 [2024-10-11 12:02:41.738826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.161 [2024-10-11 12:02:41.738836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.161 [2024-10-11 12:02:41.738845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.161 [2024-10-11 12:02:41.742421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.161 [2024-10-11 12:02:41.751440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.161 [2024-10-11 12:02:41.752048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.161 [2024-10-11 12:02:41.752106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.161 [2024-10-11 12:02:41.752120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.161 [2024-10-11 12:02:41.752376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.161 [2024-10-11 12:02:41.752603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.161 [2024-10-11 12:02:41.752612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.161 [2024-10-11 12:02:41.752620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.161 [2024-10-11 12:02:41.756207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.161 [2024-10-11 12:02:41.765495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.161 [2024-10-11 12:02:41.766144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.161 [2024-10-11 12:02:41.766173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.161 [2024-10-11 12:02:41.766182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.161 [2024-10-11 12:02:41.766406] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.161 [2024-10-11 12:02:41.766627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.161 [2024-10-11 12:02:41.766637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.161 [2024-10-11 12:02:41.766645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.161 [2024-10-11 12:02:41.770224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.161 [2024-10-11 12:02:41.779438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.161 [2024-10-11 12:02:41.780016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.161 [2024-10-11 12:02:41.780039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.161 [2024-10-11 12:02:41.780048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.161 [2024-10-11 12:02:41.780269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.161 [2024-10-11 12:02:41.780490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.161 [2024-10-11 12:02:41.780507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.161 [2024-10-11 12:02:41.780515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.161 [2024-10-11 12:02:41.784087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.424 [2024-10-11 12:02:41.793315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.424 [2024-10-11 12:02:41.794012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.424 [2024-10-11 12:02:41.794075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.424 [2024-10-11 12:02:41.794088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.424 [2024-10-11 12:02:41.794344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.424 [2024-10-11 12:02:41.794570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.424 [2024-10-11 12:02:41.794580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.424 [2024-10-11 12:02:41.794588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.424 [2024-10-11 12:02:41.798177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.424 [2024-10-11 12:02:41.807203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.424 [2024-10-11 12:02:41.807811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.424 [2024-10-11 12:02:41.807873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.424 [2024-10-11 12:02:41.807888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.424 [2024-10-11 12:02:41.808144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.424 [2024-10-11 12:02:41.808378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.424 [2024-10-11 12:02:41.808390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.424 [2024-10-11 12:02:41.808399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.424 [2024-10-11 12:02:41.811985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.424 [2024-10-11 12:02:41.821217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.424 [2024-10-11 12:02:41.821976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.424 [2024-10-11 12:02:41.822038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.424 [2024-10-11 12:02:41.822051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.424 [2024-10-11 12:02:41.822307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.424 [2024-10-11 12:02:41.822534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.424 [2024-10-11 12:02:41.822543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.424 [2024-10-11 12:02:41.822551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.424 [2024-10-11 12:02:41.826141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.424 [2024-10-11 12:02:41.835203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.424 [2024-10-11 12:02:41.835970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.424 [2024-10-11 12:02:41.836033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.424 [2024-10-11 12:02:41.836046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.424 [2024-10-11 12:02:41.836302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.424 [2024-10-11 12:02:41.836531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.424 [2024-10-11 12:02:41.836542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.424 [2024-10-11 12:02:41.836550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.424 [2024-10-11 12:02:41.840132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.424 [2024-10-11 12:02:41.849142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.424 [2024-10-11 12:02:41.849823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.424 [2024-10-11 12:02:41.849885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.424 [2024-10-11 12:02:41.849898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.424 [2024-10-11 12:02:41.850154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.424 [2024-10-11 12:02:41.850382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.424 [2024-10-11 12:02:41.850391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.424 [2024-10-11 12:02:41.850400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.424 [2024-10-11 12:02:41.853985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.424 [2024-10-11 12:02:41.863008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.424 [2024-10-11 12:02:41.863739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.424 [2024-10-11 12:02:41.863802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.424 [2024-10-11 12:02:41.863816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.424 [2024-10-11 12:02:41.864073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.424 [2024-10-11 12:02:41.864299] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.424 [2024-10-11 12:02:41.864309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.424 [2024-10-11 12:02:41.864317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.424 [2024-10-11 12:02:41.867902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.424 [2024-10-11 12:02:41.876910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.424 [2024-10-11 12:02:41.877586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.424 [2024-10-11 12:02:41.877647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.424 [2024-10-11 12:02:41.877659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.424 [2024-10-11 12:02:41.877935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.424 [2024-10-11 12:02:41.878163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.424 [2024-10-11 12:02:41.878172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.424 [2024-10-11 12:02:41.878181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.424 [2024-10-11 12:02:41.881756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.424 [2024-10-11 12:02:41.890784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.424 [2024-10-11 12:02:41.891475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.424 [2024-10-11 12:02:41.891536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.424 [2024-10-11 12:02:41.891550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.424 [2024-10-11 12:02:41.891820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.424 [2024-10-11 12:02:41.892048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.424 [2024-10-11 12:02:41.892057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.424 [2024-10-11 12:02:41.892065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.424 [2024-10-11 12:02:41.895631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.424 [2024-10-11 12:02:41.904652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.424 [2024-10-11 12:02:41.905374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.424 [2024-10-11 12:02:41.905436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.424 [2024-10-11 12:02:41.905450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.424 [2024-10-11 12:02:41.905720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.424 [2024-10-11 12:02:41.905949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.424 [2024-10-11 12:02:41.905958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.424 [2024-10-11 12:02:41.905966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.424 [2024-10-11 12:02:41.909544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.424 [2024-10-11 12:02:41.918578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.424 [2024-10-11 12:02:41.919259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.424 [2024-10-11 12:02:41.919321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.424 [2024-10-11 12:02:41.919334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.424 [2024-10-11 12:02:41.919589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.424 [2024-10-11 12:02:41.919835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:41.919846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:41.919861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:41.923445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:41.932469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:41.933089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:41.933151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:41.933164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:41.933420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:41.933647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:41.933657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:41.933665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:41.937252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:41.946472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:41.947159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:41.947221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:41.947234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:41.947491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:41.947735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:41.947745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:41.947754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:41.951339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:41.960416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:41.961063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:41.961090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:41.961100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:41.961324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:41.961545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:41.961553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:41.961561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:41.965144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:41.974454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:41.975091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:41.975128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:41.975137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:41.975361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:41.975582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:41.975602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:41.975614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:41.979198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:41.988444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:41.989041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:41.989065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:41.989073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:41.989297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:41.989517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:41.989527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:41.989534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:41.993107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:42.002348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:42.002943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:42.003006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:42.003019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:42.003275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:42.003501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:42.003511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:42.003520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:42.007114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:42.016155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:42.016914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:42.016977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:42.016991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:42.017246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:42.017481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:42.017491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:42.017499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:42.021082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:42.030008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:42.030665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:42.030748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:42.030760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:42.031017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:42.031244] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:42.031253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:42.031261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:42.034845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.425 [2024-10-11 12:02:42.043848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.425 [2024-10-11 12:02:42.044571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.425 [2024-10-11 12:02:42.044632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.425 [2024-10-11 12:02:42.044646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.425 [2024-10-11 12:02:42.044915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.425 [2024-10-11 12:02:42.045144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.425 [2024-10-11 12:02:42.045153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.425 [2024-10-11 12:02:42.045162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.425 [2024-10-11 12:02:42.048735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.687 [2024-10-11 12:02:42.057773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.687 [2024-10-11 12:02:42.058372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.687 [2024-10-11 12:02:42.058401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.687 [2024-10-11 12:02:42.058410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.687 [2024-10-11 12:02:42.058634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.687 [2024-10-11 12:02:42.058888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.687 [2024-10-11 12:02:42.058900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.687 [2024-10-11 12:02:42.058909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.687 [2024-10-11 12:02:42.062496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.687 [2024-10-11 12:02:42.071715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.687 [2024-10-11 12:02:42.072380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.687 [2024-10-11 12:02:42.072441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.687 [2024-10-11 12:02:42.072454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.687 [2024-10-11 12:02:42.072723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.687 [2024-10-11 12:02:42.072951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.687 [2024-10-11 12:02:42.072961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.687 [2024-10-11 12:02:42.072969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.687 [2024-10-11 12:02:42.076540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.687 [2024-10-11 12:02:42.085551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.687 [2024-10-11 12:02:42.086227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.687 [2024-10-11 12:02:42.086290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.687 [2024-10-11 12:02:42.086303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.687 [2024-10-11 12:02:42.086559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.687 [2024-10-11 12:02:42.086798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.687 [2024-10-11 12:02:42.086808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.687 [2024-10-11 12:02:42.086816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.687 [2024-10-11 12:02:42.090463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.687 [2024-10-11 12:02:42.099520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.687 [2024-10-11 12:02:42.100224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.687 [2024-10-11 12:02:42.100288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.687 [2024-10-11 12:02:42.100301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.687 [2024-10-11 12:02:42.100557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.687 [2024-10-11 12:02:42.100814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.687 [2024-10-11 12:02:42.100826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.687 [2024-10-11 12:02:42.100834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.687 [2024-10-11 12:02:42.104417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.687 [2024-10-11 12:02:42.112230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.687 [2024-10-11 12:02:42.112768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.687 [2024-10-11 12:02:42.112794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.687 [2024-10-11 12:02:42.112808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.687 [2024-10-11 12:02:42.112965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.687 [2024-10-11 12:02:42.113119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.687 [2024-10-11 12:02:42.113126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.687 [2024-10-11 12:02:42.113132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.687 [2024-10-11 12:02:42.115589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.687 [2024-10-11 12:02:42.124949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.687 [2024-10-11 12:02:42.125470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.687 [2024-10-11 12:02:42.125489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.687 [2024-10-11 12:02:42.125496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.125649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.125812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.125819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.125824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.128273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.137640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.138126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.138144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.138150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.138303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.138455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.138462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.138468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.140917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.150255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.150820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.150862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.150871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.151045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.151201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.151218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.151225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.153687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.162878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.163455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.163495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.163504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.163687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.163843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.163849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.163855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.166300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.175623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.176084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.176102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.176108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.176260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.176411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.176417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.176423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.178895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.188360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.188958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.188993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.189001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.189171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.189326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.189332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.189337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.191791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.200979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.201579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.201613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.201623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.201801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.201955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.201962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.201968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.204409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.213593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.214120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.214136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.214142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.214294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.214445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.214450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.214455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.216893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.226293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.226781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.226796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.226802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.226953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.227104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.227110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.227115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.229548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.239021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.239551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.239582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.239591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.239768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.239923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.688 [2024-10-11 12:02:42.239929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.688 [2024-10-11 12:02:42.239935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.688 [2024-10-11 12:02:42.242371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.688 [2024-10-11 12:02:42.251687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.688 [2024-10-11 12:02:42.252251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.688 [2024-10-11 12:02:42.252281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.688 [2024-10-11 12:02:42.252290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.688 [2024-10-11 12:02:42.252456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.688 [2024-10-11 12:02:42.252609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.689 [2024-10-11 12:02:42.252615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.689 [2024-10-11 12:02:42.252621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.689 [2024-10-11 12:02:42.255062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.689 [2024-10-11 12:02:42.264378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.689 [2024-10-11 12:02:42.264954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.689 [2024-10-11 12:02:42.264984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.689 [2024-10-11 12:02:42.264993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.689 [2024-10-11 12:02:42.265159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.689 [2024-10-11 12:02:42.265313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.689 [2024-10-11 12:02:42.265319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.689 [2024-10-11 12:02:42.265324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.689 [2024-10-11 12:02:42.267766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.689 [2024-10-11 12:02:42.277076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.689 [2024-10-11 12:02:42.277619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.689 [2024-10-11 12:02:42.277649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.689 [2024-10-11 12:02:42.277658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.689 [2024-10-11 12:02:42.277831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.689 [2024-10-11 12:02:42.277985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.689 [2024-10-11 12:02:42.277991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.689 [2024-10-11 12:02:42.278000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.689 [2024-10-11 12:02:42.280437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.689 [2024-10-11 12:02:42.289753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.689 [2024-10-11 12:02:42.290328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.689 [2024-10-11 12:02:42.290358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.689 [2024-10-11 12:02:42.290367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.689 [2024-10-11 12:02:42.290534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.689 [2024-10-11 12:02:42.290693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.689 [2024-10-11 12:02:42.290700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.689 [2024-10-11 12:02:42.290706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.689 [2024-10-11 12:02:42.293141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.689 [2024-10-11 12:02:42.302458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.689 [2024-10-11 12:02:42.303046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.689 [2024-10-11 12:02:42.303076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.689 [2024-10-11 12:02:42.303085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.689 [2024-10-11 12:02:42.303251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.689 [2024-10-11 12:02:42.303405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.689 [2024-10-11 12:02:42.303411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.689 [2024-10-11 12:02:42.303416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.689 [2024-10-11 12:02:42.305857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.689 [2024-10-11 12:02:42.315188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.689 [2024-10-11 12:02:42.315749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.689 [2024-10-11 12:02:42.315779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.689 [2024-10-11 12:02:42.315787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.689 [2024-10-11 12:02:42.315954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.689 [2024-10-11 12:02:42.316108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.689 [2024-10-11 12:02:42.316114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.689 [2024-10-11 12:02:42.316120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.318568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.327894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.951 [2024-10-11 12:02:42.328462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.951 [2024-10-11 12:02:42.328496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.951 [2024-10-11 12:02:42.328504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.951 [2024-10-11 12:02:42.328679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.951 [2024-10-11 12:02:42.328834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.951 [2024-10-11 12:02:42.328840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.951 [2024-10-11 12:02:42.328846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.331290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.340605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.951 [2024-10-11 12:02:42.341175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.951 [2024-10-11 12:02:42.341205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.951 [2024-10-11 12:02:42.341213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.951 [2024-10-11 12:02:42.341380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.951 [2024-10-11 12:02:42.341534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.951 [2024-10-11 12:02:42.341540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.951 [2024-10-11 12:02:42.341545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.343989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.353304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.951 [2024-10-11 12:02:42.353879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.951 [2024-10-11 12:02:42.353909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.951 [2024-10-11 12:02:42.353918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.951 [2024-10-11 12:02:42.354084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.951 [2024-10-11 12:02:42.354238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.951 [2024-10-11 12:02:42.354244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.951 [2024-10-11 12:02:42.354249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.356694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.366016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.951 [2024-10-11 12:02:42.366592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.951 [2024-10-11 12:02:42.366622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.951 [2024-10-11 12:02:42.366631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.951 [2024-10-11 12:02:42.366805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.951 [2024-10-11 12:02:42.366963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.951 [2024-10-11 12:02:42.366969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.951 [2024-10-11 12:02:42.366975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.369409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.378735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.951 [2024-10-11 12:02:42.379193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.951 [2024-10-11 12:02:42.379223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.951 [2024-10-11 12:02:42.379232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.951 [2024-10-11 12:02:42.379398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.951 [2024-10-11 12:02:42.379552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.951 [2024-10-11 12:02:42.379558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.951 [2024-10-11 12:02:42.379564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.382010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.391346] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.951 [2024-10-11 12:02:42.391925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.951 [2024-10-11 12:02:42.391955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.951 [2024-10-11 12:02:42.391965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.951 [2024-10-11 12:02:42.392131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.951 [2024-10-11 12:02:42.392284] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.951 [2024-10-11 12:02:42.392291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.951 [2024-10-11 12:02:42.392296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.394739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.404059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.951 [2024-10-11 12:02:42.404634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.951 [2024-10-11 12:02:42.404664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.951 [2024-10-11 12:02:42.404680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.951 [2024-10-11 12:02:42.404846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.951 [2024-10-11 12:02:42.405000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.951 [2024-10-11 12:02:42.405006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.951 [2024-10-11 12:02:42.405011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.407452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.416778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:57.951 [2024-10-11 12:02:42.417253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.951 [2024-10-11 12:02:42.417268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:57.951 [2024-10-11 12:02:42.417274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:57.951 [2024-10-11 12:02:42.417426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:57.951 [2024-10-11 12:02:42.417576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:57.951 [2024-10-11 12:02:42.417582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:57.951 [2024-10-11 12:02:42.417587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:57.951 [2024-10-11 12:02:42.420023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:57.951 [2024-10-11 12:02:42.429475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.951 [2024-10-11 12:02:42.429960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.951 [2024-10-11 12:02:42.429973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.951 [2024-10-11 12:02:42.429979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.951 [2024-10-11 12:02:42.430130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.951 [2024-10-11 12:02:42.430288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.951 [2024-10-11 12:02:42.430294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.951 [2024-10-11 12:02:42.430299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.951 [2024-10-11 12:02:42.432739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.951 [2024-10-11 12:02:42.442202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.951 [2024-10-11 12:02:42.442767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.951 [2024-10-11 12:02:42.442797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.951 [2024-10-11 12:02:42.442805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.442972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.443125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.443132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.443137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.952 [2024-10-11 12:02:42.445581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.952 [2024-10-11 12:02:42.454899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.952 [2024-10-11 12:02:42.455442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-10-11 12:02:42.455473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.952 [2024-10-11 12:02:42.455485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.455652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.455814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.455821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.455826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.952 [2024-10-11 12:02:42.458266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.952 [2024-10-11 12:02:42.467608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.952 [2024-10-11 12:02:42.468102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-10-11 12:02:42.468117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.952 [2024-10-11 12:02:42.468123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.468274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.468424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.468430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.468435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.952 [2024-10-11 12:02:42.470872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.952 [2024-10-11 12:02:42.480350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.952 [2024-10-11 12:02:42.480895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-10-11 12:02:42.480925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.952 [2024-10-11 12:02:42.480934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.481100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.481253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.481260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.481265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.952 [2024-10-11 12:02:42.483707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.952 [2024-10-11 12:02:42.493021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.952 [2024-10-11 12:02:42.493514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-10-11 12:02:42.493529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.952 [2024-10-11 12:02:42.493534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.493690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.493841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.493850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.493855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.952 [2024-10-11 12:02:42.496287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... two more identical cycles at 12:02:42.505 and 12:02:42.518, with a bdevperf throughput sample interleaved between them ...]
00:28:57.952 7370.00 IOPS, 28.79 MiB/s [2024-10-11T10:02:42.584Z]
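The throughput sample just above is the one non-repeated datum in this stretch. It is arithmetically consistent with a 4 KiB I/O size (an inference from the numbers, not stated in this excerpt):

    7370 IOPS × 4096 B = 30,187,520 B/s ÷ 1,048,576 B/MiB ≈ 28.79 MiB/s

which matches the reported figure, so I/O was still completing during the interval this sample covers despite the reconnect failures.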
00:28:57.952 [2024-10-11 12:02:42.531140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.952 [2024-10-11 12:02:42.531705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-10-11 12:02:42.531735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.952 [2024-10-11 12:02:42.531744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.531910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.532064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.532071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.532076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.952 [2024-10-11 12:02:42.534520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.952 [2024-10-11 12:02:42.543843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.952 [2024-10-11 12:02:42.544418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-10-11 12:02:42.544449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.952 [2024-10-11 12:02:42.544458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.544624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.544786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.544793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.544798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.952 [2024-10-11 12:02:42.547240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:57.952 [2024-10-11 12:02:42.556566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.952 [2024-10-11 12:02:42.557125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-10-11 12:02:42.557155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.952 [2024-10-11 12:02:42.557163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.557329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.557483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.557489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.557495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.952 [2024-10-11 12:02:42.559944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:57.952 [2024-10-11 12:02:42.569257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.952 [2024-10-11 12:02:42.569889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.952 [2024-10-11 12:02:42.569919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:57.952 [2024-10-11 12:02:42.569928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:57.952 [2024-10-11 12:02:42.570095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:57.952 [2024-10-11 12:02:42.570248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:57.952 [2024-10-11 12:02:42.570255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:57.952 [2024-10-11 12:02:42.570260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:57.953 [2024-10-11 12:02:42.572703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.215 [2024-10-11 12:02:42.581878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.215 [2024-10-11 12:02:42.582366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.215 [2024-10-11 12:02:42.582381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.215 [2024-10-11 12:02:42.582386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.215 [2024-10-11 12:02:42.582541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.215 [2024-10-11 12:02:42.582699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.215 [2024-10-11 12:02:42.582705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.215 [2024-10-11 12:02:42.582710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.215 [2024-10-11 12:02:42.585143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.215 [2024-10-11 12:02:42.594625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.215 [2024-10-11 12:02:42.595168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.215 [2024-10-11 12:02:42.595199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.215 [2024-10-11 12:02:42.595208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.215 [2024-10-11 12:02:42.595375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.215 [2024-10-11 12:02:42.595529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.215 [2024-10-11 12:02:42.595535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.215 [2024-10-11 12:02:42.595540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.215 [2024-10-11 12:02:42.597985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.215 [2024-10-11 12:02:42.607309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.215 [2024-10-11 12:02:42.607879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.215 [2024-10-11 12:02:42.607909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.215 [2024-10-11 12:02:42.607918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.215 [2024-10-11 12:02:42.608085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.215 [2024-10-11 12:02:42.608238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.215 [2024-10-11 12:02:42.608245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.215 [2024-10-11 12:02:42.608250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.215 [2024-10-11 12:02:42.610858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.215 [2024-10-11 12:02:42.620041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.215 [2024-10-11 12:02:42.620612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.215 [2024-10-11 12:02:42.620642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.215 [2024-10-11 12:02:42.620651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.215 [2024-10-11 12:02:42.620826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.215 [2024-10-11 12:02:42.620981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.215 [2024-10-11 12:02:42.620987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.215 [2024-10-11 12:02:42.620997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.215 [2024-10-11 12:02:42.623435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.215 [2024-10-11 12:02:42.632763] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.215 [2024-10-11 12:02:42.633331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.215 [2024-10-11 12:02:42.633361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.215 [2024-10-11 12:02:42.633370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.215 [2024-10-11 12:02:42.633537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.215 [2024-10-11 12:02:42.633697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.215 [2024-10-11 12:02:42.633704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.215 [2024-10-11 12:02:42.633709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.215 [2024-10-11 12:02:42.636147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.215 [2024-10-11 12:02:42.645455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.215 [2024-10-11 12:02:42.646013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.215 [2024-10-11 12:02:42.646044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.646052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.646219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.646372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.646379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.646384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.648828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.216 [2024-10-11 12:02:42.658143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.658715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.658746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.658755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.658924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.659078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.659085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.659090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.661535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.216 [2024-10-11 12:02:42.670895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.671394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.671409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.671414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.671566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.671723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.671729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.671734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.674168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.216 [2024-10-11 12:02:42.683634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.684081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.684093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.684099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.684249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.684400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.684405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.684410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.686846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.216 [2024-10-11 12:02:42.696308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.696707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.696727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.696733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.696889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.697041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.697047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.697052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.699485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.216 [2024-10-11 12:02:42.708944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.709307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.709321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.709326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.709477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.709631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.709639] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.709649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.712083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.216 [2024-10-11 12:02:42.721679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.722158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.722170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.722176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.722326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.722477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.722483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.722488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.724922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.216 [2024-10-11 12:02:42.734374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.734965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.734995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.735004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.735170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.735324] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.735330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.735336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.737777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.216 [2024-10-11 12:02:42.747084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.747554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.747568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.747574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.216 [2024-10-11 12:02:42.747728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.216 [2024-10-11 12:02:42.747879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.216 [2024-10-11 12:02:42.747885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.216 [2024-10-11 12:02:42.747890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.216 [2024-10-11 12:02:42.750324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.216 [2024-10-11 12:02:42.759788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.216 [2024-10-11 12:02:42.760263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.216 [2024-10-11 12:02:42.760275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.216 [2024-10-11 12:02:42.760280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.217 [2024-10-11 12:02:42.760431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.217 [2024-10-11 12:02:42.760583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.217 [2024-10-11 12:02:42.760588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.217 [2024-10-11 12:02:42.760593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.217 [2024-10-11 12:02:42.763046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.217 [2024-10-11 12:02:42.772507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.217 [2024-10-11 12:02:42.773050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.217 [2024-10-11 12:02:42.773080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.217 [2024-10-11 12:02:42.773089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.217 [2024-10-11 12:02:42.773258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.217 [2024-10-11 12:02:42.773412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.217 [2024-10-11 12:02:42.773419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.217 [2024-10-11 12:02:42.773424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.217 [2024-10-11 12:02:42.775867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.217 [2024-10-11 12:02:42.785189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.217 [2024-10-11 12:02:42.785685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.217 [2024-10-11 12:02:42.785700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.217 [2024-10-11 12:02:42.785706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.217 [2024-10-11 12:02:42.785857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.217 [2024-10-11 12:02:42.786008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.217 [2024-10-11 12:02:42.786014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.217 [2024-10-11 12:02:42.786019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.217 [2024-10-11 12:02:42.788453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.217 [2024-10-11 12:02:42.797929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.217 [2024-10-11 12:02:42.798420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.217 [2024-10-11 12:02:42.798432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.217 [2024-10-11 12:02:42.798444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.217 [2024-10-11 12:02:42.798595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.217 [2024-10-11 12:02:42.798752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.217 [2024-10-11 12:02:42.798758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.217 [2024-10-11 12:02:42.798763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.217 [2024-10-11 12:02:42.801193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.217 [2024-10-11 12:02:42.810557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.217 [2024-10-11 12:02:42.811094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.217 [2024-10-11 12:02:42.811108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.217 [2024-10-11 12:02:42.811113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.217 [2024-10-11 12:02:42.811264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.217 [2024-10-11 12:02:42.811415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.217 [2024-10-11 12:02:42.811421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.217 [2024-10-11 12:02:42.811426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.217 [2024-10-11 12:02:42.813863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.217 [2024-10-11 12:02:42.823181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.217 [2024-10-11 12:02:42.823664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.217 [2024-10-11 12:02:42.823680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.217 [2024-10-11 12:02:42.823686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.217 [2024-10-11 12:02:42.823836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.217 [2024-10-11 12:02:42.823987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.217 [2024-10-11 12:02:42.823992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.217 [2024-10-11 12:02:42.823997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.217 [2024-10-11 12:02:42.826428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.217 [2024-10-11 12:02:42.835904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.217 [2024-10-11 12:02:42.836391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.217 [2024-10-11 12:02:42.836403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.217 [2024-10-11 12:02:42.836409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.217 [2024-10-11 12:02:42.836559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.217 [2024-10-11 12:02:42.836716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.217 [2024-10-11 12:02:42.836726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.217 [2024-10-11 12:02:42.836731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.217 [2024-10-11 12:02:42.839165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.479 [2024-10-11 12:02:42.848631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.479 [2024-10-11 12:02:42.849215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.849245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.849254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.849421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.849574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.849581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.849586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.852030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.480 [2024-10-11 12:02:42.861266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.861653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.861671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.861677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.861829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.861979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.861985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.861990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.864424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.480 [2024-10-11 12:02:42.873895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.874260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.874272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.874278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.874428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.874578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.874584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.874589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.877028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.480 [2024-10-11 12:02:42.886640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.887181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.887212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.887221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.887387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.887541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.887548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.887553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.889997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.480 [2024-10-11 12:02:42.899327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.899683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.899702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.899708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.899860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.900010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.900016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.900021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.902459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.480 [2024-10-11 12:02:42.912086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.912545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.912558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.912563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.912720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.912871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.912877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.912883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.915315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.480 [2024-10-11 12:02:42.924787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.925273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.925284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.925290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.925444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.925595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.925601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.925606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.928045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.480 [2024-10-11 12:02:42.937517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.937995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.938008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.938013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.938163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.938314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.938319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.938324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.940760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.480 [2024-10-11 12:02:42.950217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.950664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.950681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.950687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.950837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.950987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.950993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.950998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.953431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.480 [2024-10-11 12:02:42.962903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.963468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.963498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.963507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.963679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.963833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.963840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.480 [2024-10-11 12:02:42.963850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.480 [2024-10-11 12:02:42.966290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:58.480 [2024-10-11 12:02:42.975608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.480 [2024-10-11 12:02:42.976068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.480 [2024-10-11 12:02:42.976083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:58.480 [2024-10-11 12:02:42.976089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:58.480 [2024-10-11 12:02:42.976240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:58.480 [2024-10-11 12:02:42.976391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:58.480 [2024-10-11 12:02:42.976397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:58.481 [2024-10-11 12:02:42.976402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.481 [2024-10-11 12:02:42.978834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:58.481 [2024-10-11 12:02:42.988298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:42.988742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:42.988755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:42.988761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:42.988911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:42.989062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:42.989068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:42.989073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:42.991503] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.000963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.001410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.001421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.001427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.001577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.001734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.001740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.001745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.004181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.013671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.014128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.014141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.014147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.014298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.014448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.014454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.014459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.016893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.026345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.026731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.026743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.026748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.026899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.027049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.027055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.027060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.029488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.039095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.039541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.039553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.039558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.039714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.039865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.039870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.039875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.042304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.051836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.052454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.052484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.052493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.052664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.052825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.052832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.052837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.055274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.064457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.064995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.065026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.065034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.065201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.065354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.065361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.065366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.067812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.077137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.077725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.077756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.077765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.077934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.078087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.078094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.078099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.080542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.089868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.090435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.090465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.090474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.090640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.090799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.090806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.090811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.093251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.481 [2024-10-11 12:02:43.102575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.481 [2024-10-11 12:02:43.102918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.481 [2024-10-11 12:02:43.102933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.481 [2024-10-11 12:02:43.102939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.481 [2024-10-11 12:02:43.103090] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.481 [2024-10-11 12:02:43.103242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.481 [2024-10-11 12:02:43.103247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.481 [2024-10-11 12:02:43.103253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.481 [2024-10-11 12:02:43.105697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.743 [2024-10-11 12:02:43.115317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.743 [2024-10-11 12:02:43.115807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.743 [2024-10-11 12:02:43.115821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.743 [2024-10-11 12:02:43.115826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.743 [2024-10-11 12:02:43.115977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.743 [2024-10-11 12:02:43.116129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.743 [2024-10-11 12:02:43.116135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.743 [2024-10-11 12:02:43.116140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.743 [2024-10-11 12:02:43.118572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.743 [2024-10-11 12:02:43.128038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.743 [2024-10-11 12:02:43.128522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.743 [2024-10-11 12:02:43.128534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.743 [2024-10-11 12:02:43.128540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.743 [2024-10-11 12:02:43.128694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.743 [2024-10-11 12:02:43.128846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.743 [2024-10-11 12:02:43.128852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.743 [2024-10-11 12:02:43.128856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.743 [2024-10-11 12:02:43.131296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.743 [2024-10-11 12:02:43.140755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.743 [2024-10-11 12:02:43.141207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.743 [2024-10-11 12:02:43.141221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.743 [2024-10-11 12:02:43.141227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.141377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.141528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.141533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.141538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.143973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.153428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.153983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.154013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.154022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.154188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.154342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.154348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.154354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.156797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.166135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.166740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.166771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.166780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.166946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.167100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.167107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.167112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.169556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.178890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.179356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.179370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.179376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.179527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.179685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.179691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.179697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.182131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.191596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.192040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.192053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.192058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.192209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.192359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.192365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.192370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.194805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.204275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.204762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.204775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.204780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.204931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.205082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.205087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.205092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.207526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.217000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.217460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.217473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.217479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.217630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.217786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.217794] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.217799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.220254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.229725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.230283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.230314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.230323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.230489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.230643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.230649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.230655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.233105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.242426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.242946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.242961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.242967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.243118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.243269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.243275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.243280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.245717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.255104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.255711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.255742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.255751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.255919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.256074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.256080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.256086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.258528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.267722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.268195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.744 [2024-10-11 12:02:43.268211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.744 [2024-10-11 12:02:43.268220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.744 [2024-10-11 12:02:43.268371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.744 [2024-10-11 12:02:43.268523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.744 [2024-10-11 12:02:43.268528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.744 [2024-10-11 12:02:43.268534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.744 [2024-10-11 12:02:43.270972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.744 [2024-10-11 12:02:43.280436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.744 [2024-10-11 12:02:43.280906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.745 [2024-10-11 12:02:43.280920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.745 [2024-10-11 12:02:43.280925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.745 [2024-10-11 12:02:43.281076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.745 [2024-10-11 12:02:43.281226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.745 [2024-10-11 12:02:43.281232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.745 [2024-10-11 12:02:43.281237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.745 [2024-10-11 12:02:43.283685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.745 [2024-10-11 12:02:43.293150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.745 [2024-10-11 12:02:43.293629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.745 [2024-10-11 12:02:43.293641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.745 [2024-10-11 12:02:43.293646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.745 [2024-10-11 12:02:43.293801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.745 [2024-10-11 12:02:43.293952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.745 [2024-10-11 12:02:43.293958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.745 [2024-10-11 12:02:43.293963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.745 [2024-10-11 12:02:43.296394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.745 [2024-10-11 12:02:43.305866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.745 [2024-10-11 12:02:43.306478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.745 [2024-10-11 12:02:43.306508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.745 [2024-10-11 12:02:43.306517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.745 [2024-10-11 12:02:43.306690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.745 [2024-10-11 12:02:43.306844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.745 [2024-10-11 12:02:43.306850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.745 [2024-10-11 12:02:43.306860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.745 [2024-10-11 12:02:43.309297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.745 [2024-10-11 12:02:43.318482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.745 [2024-10-11 12:02:43.318974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.745 [2024-10-11 12:02:43.318990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.745 [2024-10-11 12:02:43.318995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.745 [2024-10-11 12:02:43.319147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.745 [2024-10-11 12:02:43.319297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.745 [2024-10-11 12:02:43.319303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.745 [2024-10-11 12:02:43.319307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.745 [2024-10-11 12:02:43.321743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.745 [2024-10-11 12:02:43.331210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.745 [2024-10-11 12:02:43.331772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.745 [2024-10-11 12:02:43.331802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.745 [2024-10-11 12:02:43.331812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.745 [2024-10-11 12:02:43.331981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.745 [2024-10-11 12:02:43.332135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.745 [2024-10-11 12:02:43.332142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.745 [2024-10-11 12:02:43.332147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.745 [2024-10-11 12:02:43.334591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.745 [2024-10-11 12:02:43.343920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.745 [2024-10-11 12:02:43.344418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.745 [2024-10-11 12:02:43.344448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.745 [2024-10-11 12:02:43.344457] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.745 [2024-10-11 12:02:43.344626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.745 [2024-10-11 12:02:43.344785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.745 [2024-10-11 12:02:43.344793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.745 [2024-10-11 12:02:43.344798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.745 [2024-10-11 12:02:43.347237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.745 [2024-10-11 12:02:43.356561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.745 [2024-10-11 12:02:43.357137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.745 [2024-10-11 12:02:43.357167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.745 [2024-10-11 12:02:43.357177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.745 [2024-10-11 12:02:43.357345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.745 [2024-10-11 12:02:43.357498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.745 [2024-10-11 12:02:43.357504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.745 [2024-10-11 12:02:43.357510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.745 [2024-10-11 12:02:43.359955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:58.745 [2024-10-11 12:02:43.369285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:58.745 [2024-10-11 12:02:43.369766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.745 [2024-10-11 12:02:43.369781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:58.745 [2024-10-11 12:02:43.369787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:58.745 [2024-10-11 12:02:43.369939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:58.745 [2024-10-11 12:02:43.370090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:58.745 [2024-10-11 12:02:43.370095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:58.745 [2024-10-11 12:02:43.370100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:58.745 [2024-10-11 12:02:43.372530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.381995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.382477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.382489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.382495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.382645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.382801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.382808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.382813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.385244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.394701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.395185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.395197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.395203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.395358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.395508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.395514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.395519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.397954] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.407417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.407863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.407893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.407902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.408068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.408223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.408230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.408236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.410686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.420156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.420639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.420653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.420659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.420815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.420966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.420972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.420977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.423410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.432905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.433390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.433403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.433409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.433559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.433715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.433722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.433733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.436167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.445614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.446204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.446234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.446243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.446410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.446563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.446570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.446575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.449019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.458334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.458943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.458973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.458982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.459149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.459302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.459309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.459314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.461765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.471079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.471662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.471697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.471706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.471873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.472026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.472033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.472038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.474478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.007 [2024-10-11 12:02:43.483802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.007 [2024-10-11 12:02:43.484297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.007 [2024-10-11 12:02:43.484315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.007 [2024-10-11 12:02:43.484321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.007 [2024-10-11 12:02:43.484472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.007 [2024-10-11 12:02:43.484623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.007 [2024-10-11 12:02:43.484629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.007 [2024-10-11 12:02:43.484634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.007 [2024-10-11 12:02:43.487070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.496516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.497011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.497023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.497029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.497180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.497330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.497336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.497340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.499790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.509257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 5896.00 IOPS, 23.03 MiB/s [2024-10-11T10:02:43.640Z] [2024-10-11 12:02:43.510949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.510979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.510988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.511155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.511308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.511315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.511320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.513763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
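
Note: the "5896.00 IOPS, 23.03 MiB/s" record above is bdevperf's periodic throughput sample, still printed while every reset attempt fails; the stray "[2024-10-11T10:02:43.640Z]" is the Jenkins console timestamp interleaved into the same output line. The two figures are consistent with an assumed 4 KiB I/O size (the size is not stated in this excerpt): 5896 x 4096 B/s = 24,150,016 B/s, which is about 23.03 MiB/s. A quick check in C:

/* Sanity-check of the bdevperf sample above: 5896.00 IOPS at an
 * assumed 4 KiB I/O size works out to ~23.03 MiB/s, matching the
 * logged throughput. */
#include <stdio.h>

int main(void)
{
    double iops = 5896.0;
    double io_size = 4096.0;                       /* assumed: 4 KiB per I/O */
    double mib_s = iops * io_size / (1024.0 * 1024.0);
    printf("%.2f MiB/s\n", mib_s);                 /* prints 23.03 */
    return 0;
}
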
00:28:59.008 [2024-10-11 12:02:43.521934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.522516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.522545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.522554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.522728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.522886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.522893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.522898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.525332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.534657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.535234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.535264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.535273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.535439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.535592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.535599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.535604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.538048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.547363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.547990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.548020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.548029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.548195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.548349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.548355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.548361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.550805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.560114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.560707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.560737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.560749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.560921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.561075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.561081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.561087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.563532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.572848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.573394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.573424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.573433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.573599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.573761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.573769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.573774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.576211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.585522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.586119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.586149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.586158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.586325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.586479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.586485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.586491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.588937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.598245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.598868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.598898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.598907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.599075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.599228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.599234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.599240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.601681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.610982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.611574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.611604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.611616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.611789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.611944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.611950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.008 [2024-10-11 12:02:43.611955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.008 [2024-10-11 12:02:43.614392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.008 [2024-10-11 12:02:43.623707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.008 [2024-10-11 12:02:43.624287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.008 [2024-10-11 12:02:43.624317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.008 [2024-10-11 12:02:43.624326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.008 [2024-10-11 12:02:43.624492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.008 [2024-10-11 12:02:43.624646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.008 [2024-10-11 12:02:43.624652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.009 [2024-10-11 12:02:43.624657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.009 [2024-10-11 12:02:43.627100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.009 [2024-10-11 12:02:43.636447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.009 [2024-10-11 12:02:43.637018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.009 [2024-10-11 12:02:43.637048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.009 [2024-10-11 12:02:43.637057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.009 [2024-10-11 12:02:43.637225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.637379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.637387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.637392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.639837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.649146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.649734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.649769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.649778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.649944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.650098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.650107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.650113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.652553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.661875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.662439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.662469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.662478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.662647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.662809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.662816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.662822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.665259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.674568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.675156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.675186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.675195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.675362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.675516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.675522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.675527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.677969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.687279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.687854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.687884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.687893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.688060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.688214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.688220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.688225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.690671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.699990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.700563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.700593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.700602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.700775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.700929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.700936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.700942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.703379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.712710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.713269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.713299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.713308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.713474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.713628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.713634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.713640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.716083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.725398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.725950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.725980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.725989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.726155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.726309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.726315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.726321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.728763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.738090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.738686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.738716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.738724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.738894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.739048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.739054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.739059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.741496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.271 [2024-10-11 12:02:43.750811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.271 [2024-10-11 12:02:43.751346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.271 [2024-10-11 12:02:43.751376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.271 [2024-10-11 12:02:43.751385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.271 [2024-10-11 12:02:43.751552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.271 [2024-10-11 12:02:43.751713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.271 [2024-10-11 12:02:43.751720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.271 [2024-10-11 12:02:43.751726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.271 [2024-10-11 12:02:43.754163] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.763492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.764051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.764081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.764090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.764256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.764410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.764416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.764422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.766864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.776172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.776750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.776779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.776788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.776957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.777111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.777117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.777125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.779569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.788884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.789448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.789478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.789487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.789653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.789814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.789821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.789827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.792263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.801579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.802134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.802164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.802173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.802339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.802493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.802499] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.802504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.804946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.814270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.814760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.814791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.814800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.814969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.815123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.815129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.815134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.817577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.826887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.827362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.827379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.827385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.827536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.827693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.827700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.827705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.830135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.839587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.840046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.840059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.840064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.840215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.840365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.840371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.840376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.842828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.852280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.852882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.852912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.852921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.853088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.853241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.853248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.853253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.855696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.865014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.865597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.865627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.865636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.865809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.865968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.865975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.865980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.868414] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.877728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.878293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.878324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.878332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.878499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.878652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.878659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.878664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.272 [2024-10-11 12:02:43.881108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.272 [2024-10-11 12:02:43.890421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.272 [2024-10-11 12:02:43.890982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.272 [2024-10-11 12:02:43.891012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.272 [2024-10-11 12:02:43.891021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.272 [2024-10-11 12:02:43.891187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.272 [2024-10-11 12:02:43.891341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.272 [2024-10-11 12:02:43.891347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.272 [2024-10-11 12:02:43.891352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.273 [2024-10-11 12:02:43.893794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:43.903115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:43.903655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:43.903691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:43.903700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:43.903869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:43.904023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:43.904029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:43.904035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:43.906488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:43.915816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:43.916427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:43.916457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:43.916466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:43.916632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:43.916794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:43.916801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:43.916806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:43.919242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:43.928556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:43.929083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:43.929113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:43.929122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:43.929288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:43.929442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:43.929448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:43.929454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:43.931895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:43.941208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:43.941770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:43.941800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:43.941809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:43.941978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:43.942132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:43.942139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:43.942144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:43.944586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:43.953897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:43.954461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:43.954491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:43.954503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:43.954678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:43.954832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:43.954839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:43.954844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:43.957281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:43.966597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:43.967176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:43.967207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:43.967216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:43.967382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:43.967536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:43.967542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:43.967547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:43.969991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:43.979303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:43.979894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:43.979924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:43.979934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:43.980103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:43.980257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:43.980263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:43.980268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:43.982713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:43.992033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:43.992520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:43.992534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:43.992540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:43.992696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:43.992848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:43.992857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:43.992862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:43.995293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:44.004747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:44.005326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.535 [2024-10-11 12:02:44.005355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.535 [2024-10-11 12:02:44.005364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.535 [2024-10-11 12:02:44.005531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.535 [2024-10-11 12:02:44.005692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.535 [2024-10-11 12:02:44.005700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.535 [2024-10-11 12:02:44.005705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.535 [2024-10-11 12:02:44.008150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.535 [2024-10-11 12:02:44.017357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.535 [2024-10-11 12:02:44.017963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.536 [2024-10-11 12:02:44.017993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.536 [2024-10-11 12:02:44.018003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.536 [2024-10-11 12:02:44.018169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.536 [2024-10-11 12:02:44.018323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.536 [2024-10-11 12:02:44.018329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.536 [2024-10-11 12:02:44.018335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.536 [2024-10-11 12:02:44.020779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.536 [2024-10-11 12:02:44.030096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.536 [2024-10-11 12:02:44.030574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.536 [2024-10-11 12:02:44.030589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.536 [2024-10-11 12:02:44.030595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.536 [2024-10-11 12:02:44.030751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.536 [2024-10-11 12:02:44.030902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.536 [2024-10-11 12:02:44.030908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.536 [2024-10-11 12:02:44.030913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.536 [2024-10-11 12:02:44.033344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.536 [2024-10-11 12:02:44.042818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.536 [2024-10-11 12:02:44.043431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.536 [2024-10-11 12:02:44.043462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.536 [2024-10-11 12:02:44.043470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.536 [2024-10-11 12:02:44.043637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.536 [2024-10-11 12:02:44.043795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.536 [2024-10-11 12:02:44.043802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.536 [2024-10-11 12:02:44.043808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.536 [2024-10-11 12:02:44.046243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.536 [2024-10-11 12:02:44.055445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.536 [2024-10-11 12:02:44.055919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.536 [2024-10-11 12:02:44.055934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.536 [2024-10-11 12:02:44.055940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.536 [2024-10-11 12:02:44.056091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.536 [2024-10-11 12:02:44.056242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.536 [2024-10-11 12:02:44.056248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.536 [2024-10-11 12:02:44.056253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.536 [2024-10-11 12:02:44.058685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.536 [2024-10-11 12:02:44.068152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.536 [2024-10-11 12:02:44.068641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.536 [2024-10-11 12:02:44.068653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.536 [2024-10-11 12:02:44.068659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.536 [2024-10-11 12:02:44.068815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.536 [2024-10-11 12:02:44.068966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.536 [2024-10-11 12:02:44.068972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.536 [2024-10-11 12:02:44.068977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.536 [2024-10-11 12:02:44.071408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.536 [2024-10-11 12:02:44.080884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.536 [2024-10-11 12:02:44.081436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.536 [2024-10-11 12:02:44.081465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.536 [2024-10-11 12:02:44.081474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.536 [2024-10-11 12:02:44.081648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.536 [2024-10-11 12:02:44.081809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.536 [2024-10-11 12:02:44.081816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.536 [2024-10-11 12:02:44.081822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.536 [2024-10-11 12:02:44.084259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.536 [2024-10-11 12:02:44.093569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:59.536 [2024-10-11 12:02:44.094138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.536 [2024-10-11 12:02:44.094168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420
00:28:59.536 [2024-10-11 12:02:44.094177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set
00:28:59.536 [2024-10-11 12:02:44.094344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor
00:28:59.536 [2024-10-11 12:02:44.094497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:59.536 [2024-10-11 12:02:44.094503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:59.536 [2024-10-11 12:02:44.094509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:59.536 [2024-10-11 12:02:44.096951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:59.536 [2024-10-11 12:02:44.106264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.536 [2024-10-11 12:02:44.106807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.536 [2024-10-11 12:02:44.106837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.536 [2024-10-11 12:02:44.106846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.536 [2024-10-11 12:02:44.107024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.536 [2024-10-11 12:02:44.107179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.536 [2024-10-11 12:02:44.107185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.536 [2024-10-11 12:02:44.107191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.536 [2024-10-11 12:02:44.109638] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.536 [2024-10-11 12:02:44.118965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.536 [2024-10-11 12:02:44.119532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.536 [2024-10-11 12:02:44.119562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.536 [2024-10-11 12:02:44.119570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.536 [2024-10-11 12:02:44.119745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.536 [2024-10-11 12:02:44.119900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.536 [2024-10-11 12:02:44.119906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.536 [2024-10-11 12:02:44.119915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.536 [2024-10-11 12:02:44.122350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.536 [2024-10-11 12:02:44.131657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.536 [2024-10-11 12:02:44.132222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.536 [2024-10-11 12:02:44.132252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.536 [2024-10-11 12:02:44.132261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.536 [2024-10-11 12:02:44.132428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.536 [2024-10-11 12:02:44.132581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.536 [2024-10-11 12:02:44.132587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.536 [2024-10-11 12:02:44.132593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.536 [2024-10-11 12:02:44.135037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.536 [2024-10-11 12:02:44.144364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.536 [2024-10-11 12:02:44.144939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.536 [2024-10-11 12:02:44.144969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.536 [2024-10-11 12:02:44.144978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.536 [2024-10-11 12:02:44.145144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.536 [2024-10-11 12:02:44.145298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.536 [2024-10-11 12:02:44.145304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.536 [2024-10-11 12:02:44.145310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.537 [2024-10-11 12:02:44.147752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.537 [2024-10-11 12:02:44.157061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.537 [2024-10-11 12:02:44.157660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.537 [2024-10-11 12:02:44.157696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.537 [2024-10-11 12:02:44.157704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.537 [2024-10-11 12:02:44.157870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.537 [2024-10-11 12:02:44.158024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.537 [2024-10-11 12:02:44.158030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.537 [2024-10-11 12:02:44.158035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.537 [2024-10-11 12:02:44.160472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.799 [2024-10-11 12:02:44.169797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.799 [2024-10-11 12:02:44.170369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-10-11 12:02:44.170403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.799 [2024-10-11 12:02:44.170411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.799 [2024-10-11 12:02:44.170578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.799 [2024-10-11 12:02:44.170739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.799 [2024-10-11 12:02:44.170746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.799 [2024-10-11 12:02:44.170752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.799 [2024-10-11 12:02:44.173188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.799 [2024-10-11 12:02:44.182498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.799 [2024-10-11 12:02:44.182979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-10-11 12:02:44.183008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.799 [2024-10-11 12:02:44.183017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.799 [2024-10-11 12:02:44.183184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.799 [2024-10-11 12:02:44.183337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.799 [2024-10-11 12:02:44.183343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.799 [2024-10-11 12:02:44.183348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1195091 Killed "${NVMF_APP[@]}" "$@" 00:28:59.799 [2024-10-11 12:02:44.185797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1196789 00:28:59.799 [2024-10-11 12:02:44.195123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1196789 00:28:59.799 [2024-10-11 12:02:44.195581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:59.799 [2024-10-11 12:02:44.195596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.799 [2024-10-11 12:02:44.195607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1196789 ']' 00:28:59.799 [2024-10-11 12:02:44.195763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.799 [2024-10-11 12:02:44.195914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.799 [2024-10-11 12:02:44.195924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller 
reinitialization failed 00:28:59.799 [2024-10-11 12:02:44.195929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:59.799 12:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.800 [2024-10-11 12:02:44.198363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.800 [2024-10-11 12:02:44.207850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.208340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.208352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.208357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.208508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.208658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.208664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.208674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.211104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.800 [2024-10-11 12:02:44.220560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.221045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.221075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.221084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.221251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.221405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.221411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.221417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:59.800 [2024-10-11 12:02:44.223860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.800 [2024-10-11 12:02:44.233174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.233908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.233939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.233948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.234115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.234272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.234279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.234284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.236741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.800 [2024-10-11 12:02:44.245503] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:28:59.800 [2024-10-11 12:02:44.245548] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.800 [2024-10-11 12:02:44.245920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.246466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.246497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.246506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.246679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.246834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.246841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.246847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.249285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
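The xtrace lines interleaved above show why the connects are being refused: line 35 of bdevperf.sh has just killed the previous target (PID 1195091, the Killed "${NVMF_APP[@]}" message), and tgt_init/nvmfappstart is relaunching nvmf_tgt as PID 1196789 inside the cvl_0_0_ns_spdk network namespace, then waiting for its RPC socket. A sketch of the equivalent manual restart, with the invocation copied verbatim from the trace and a simplified stand-in for the waitforlisten helper:

```bash
# Relaunch the NVMe-oF target the way nvmfappstart does in this run
# (command line copied verbatim from the xtrace above).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Simplified stand-in for common.sh's waitforlisten: poll until the app
# serves RPC on /var/tmp/spdk.sock (the real helper retries an rpc.py call).
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
```

Here -m 0xE is the core mask 0b1110, i.e. cores 1-3, matching the "Total cores available: 3" and the three "Reactor started on core 1/2/3" notices below, and -e 0xFFFF enables every tracepoint group, which the app_setup_trace notices then report.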
00:28:59.800 [2024-10-11 12:02:44.258598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.259110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.259126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.259133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.259284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.259435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.259441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.259446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.261891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.800 [2024-10-11 12:02:44.271248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.271591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.271604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.271610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.271765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.271920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.271926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.271932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.274361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.800 [2024-10-11 12:02:44.283969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.284459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.284472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.284477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.284629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.284784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.284790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.284795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.287294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.800 [2024-10-11 12:02:44.296618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.297205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.297236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.297245] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.297412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.297566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.297573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.297578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.300018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.800 [2024-10-11 12:02:44.309347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.309831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.309847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.309853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.310004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.310155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.310161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.310167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.312608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.800 [2024-10-11 12:02:44.322075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.800 [2024-10-11 12:02:44.322641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-10-11 12:02:44.322678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.800 [2024-10-11 12:02:44.322687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.800 [2024-10-11 12:02:44.322854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.800 [2024-10-11 12:02:44.323008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.800 [2024-10-11 12:02:44.323014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.800 [2024-10-11 12:02:44.323020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.800 [2024-10-11 12:02:44.325457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.800 [2024-10-11 12:02:44.330399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.801 [2024-10-11 12:02:44.334780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.801 [2024-10-11 12:02:44.335295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.801 [2024-10-11 12:02:44.335310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.801 [2024-10-11 12:02:44.335316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.801 [2024-10-11 12:02:44.335467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.801 [2024-10-11 12:02:44.335618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.801 [2024-10-11 12:02:44.335625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.801 [2024-10-11 12:02:44.335630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.801 [2024-10-11 12:02:44.338075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.801 [2024-10-11 12:02:44.347386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.801 [2024-10-11 12:02:44.347902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.801 [2024-10-11 12:02:44.347915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.801 [2024-10-11 12:02:44.347921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.801 [2024-10-11 12:02:44.348073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.801 [2024-10-11 12:02:44.348223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.801 [2024-10-11 12:02:44.348229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.801 [2024-10-11 12:02:44.348234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.801 [2024-10-11 12:02:44.350664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.801 [2024-10-11 12:02:44.359605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.801 [2024-10-11 12:02:44.359630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.801 [2024-10-11 12:02:44.359640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.801 [2024-10-11 12:02:44.359646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.801 [2024-10-11 12:02:44.359651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:59.801 [2024-10-11 12:02:44.360128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.801 [2024-10-11 12:02:44.360692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.801 [2024-10-11 12:02:44.360723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.801 [2024-10-11 12:02:44.360733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.801 [2024-10-11 12:02:44.360791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.801 [2024-10-11 12:02:44.360906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.801 [2024-10-11 12:02:44.361060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.801 [2024-10-11 12:02:44.361067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.801 [2024-10-11 12:02:44.361072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.801 [2024-10-11 12:02:44.361111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.801 [2024-10-11 12:02:44.361112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.801 [2024-10-11 12:02:44.363520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.801 [2024-10-11 12:02:44.372847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.801 [2024-10-11 12:02:44.373423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.801 [2024-10-11 12:02:44.373454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.801 [2024-10-11 12:02:44.373463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.801 [2024-10-11 12:02:44.373634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.801 [2024-10-11 12:02:44.373794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.801 [2024-10-11 12:02:44.373802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.801 [2024-10-11 12:02:44.373808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.801 [2024-10-11 12:02:44.376246] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.801 [2024-10-11 12:02:44.385559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.801 [2024-10-11 12:02:44.386168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.801 [2024-10-11 12:02:44.386200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.801 [2024-10-11 12:02:44.386210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.801 [2024-10-11 12:02:44.386379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.801 [2024-10-11 12:02:44.386533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.801 [2024-10-11 12:02:44.386539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.801 [2024-10-11 12:02:44.386546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.801 [2024-10-11 12:02:44.388993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.801 [2024-10-11 12:02:44.398170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.801 [2024-10-11 12:02:44.398769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.801 [2024-10-11 12:02:44.398800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.801 [2024-10-11 12:02:44.398809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.801 [2024-10-11 12:02:44.398980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.801 [2024-10-11 12:02:44.399133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.801 [2024-10-11 12:02:44.399140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.801 [2024-10-11 12:02:44.399145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.801 [2024-10-11 12:02:44.401588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.801 [2024-10-11 12:02:44.410795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.801 [2024-10-11 12:02:44.411402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.801 [2024-10-11 12:02:44.411432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.801 [2024-10-11 12:02:44.411441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.801 [2024-10-11 12:02:44.411609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.801 [2024-10-11 12:02:44.411768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.801 [2024-10-11 12:02:44.411775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.801 [2024-10-11 12:02:44.411781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.801 [2024-10-11 12:02:44.414223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:59.801 [2024-10-11 12:02:44.423410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.801 [2024-10-11 12:02:44.423967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.801 [2024-10-11 12:02:44.423997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:28:59.801 [2024-10-11 12:02:44.424006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:28:59.801 [2024-10-11 12:02:44.424173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:28:59.801 [2024-10-11 12:02:44.424328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.801 [2024-10-11 12:02:44.424335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.801 [2024-10-11 12:02:44.424340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.801 [2024-10-11 12:02:44.426783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.064 [2024-10-11 12:02:44.436116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.436611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.436624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.436635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.436790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.436942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.436948] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.436953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.064 [2024-10-11 12:02:44.439386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.064 [2024-10-11 12:02:44.448852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.449434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.449465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.449474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.449641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.449803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.449811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.449816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.064 [2024-10-11 12:02:44.452255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.064 [2024-10-11 12:02:44.461468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.461917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.461947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.461958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.462128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.462282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.462289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.462294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.064 [2024-10-11 12:02:44.464735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.064 [2024-10-11 12:02:44.474104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.474708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.474739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.474748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.474917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.475071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.475081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.475086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.064 [2024-10-11 12:02:44.477529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.064 [2024-10-11 12:02:44.486861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.487442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.487472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.487481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.487648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.487809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.487818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.487824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.064 [2024-10-11 12:02:44.490262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.064 [2024-10-11 12:02:44.499583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.499977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.499993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.499999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.500151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.500302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.500308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.500313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.064 [2024-10-11 12:02:44.502750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.064 4913.33 IOPS, 19.19 MiB/s [2024-10-11T10:02:44.696Z] [2024-10-11 12:02:44.513357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.514001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.514031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.514040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.514207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.514361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.514367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.514373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.064 [2024-10-11 12:02:44.516832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.064 [2024-10-11 12:02:44.526033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.526508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.526536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.526545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.526717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.526872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.526878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.526884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.064 [2024-10-11 12:02:44.529321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
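The bdevperf sample at the head of the block above (4913.33 IOPS, 19.19 MiB/s) is self-consistent with a 4 KiB I/O size; the block-size flag itself is not visible in this excerpt, so 4096 B is an assumption:

$$
4913.33\ \mathrm{ops/s} \times 4096\ \mathrm{B} \approx 2.0125 \times 10^{7}\ \mathrm{B/s},
\qquad
\frac{2.0125 \times 10^{7}\ \mathrm{B/s}}{2^{20}\ \mathrm{B/MiB}} \approx 19.19\ \mathrm{MiB/s}.
$$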
00:29:00.064 [2024-10-11 12:02:44.538653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.064 [2024-10-11 12:02:44.539198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.064 [2024-10-11 12:02:44.539228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.064 [2024-10-11 12:02:44.539237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.064 [2024-10-11 12:02:44.539404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.064 [2024-10-11 12:02:44.539559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.064 [2024-10-11 12:02:44.539566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.064 [2024-10-11 12:02:44.539571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.542014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.065 [2024-10-11 12:02:44.551333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.551930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.551961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.551970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.552139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.552293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.552299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.552304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.554748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.065 [2024-10-11 12:02:44.564079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.564593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.564607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.564616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.564772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.564924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.564929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.564934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.567367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.065 [2024-10-11 12:02:44.576690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.577303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.577334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.577343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.577510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.577664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.577677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.577683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.580117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.065 [2024-10-11 12:02:44.589293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.589815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.589845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.589854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.590023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.590177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.590183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.590189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.592630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.065 [2024-10-11 12:02:44.601963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.602527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.602558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.602567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.602740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.602894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.602901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.602910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.605345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.065 [2024-10-11 12:02:44.614714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.615197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.615227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.615236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.615403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.615556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.615563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.615568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.618008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.065 [2024-10-11 12:02:44.627472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.628013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.628043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.628052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.628219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.628373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.628379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.628384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.630827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.065 [2024-10-11 12:02:44.640158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.640646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.640661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.640671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.640823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.640973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.640979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.640984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.643413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.065 [2024-10-11 12:02:44.652877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.653132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.653145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.653150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.653302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.653453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.653459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.653464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.655901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.065 [2024-10-11 12:02:44.665507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.665971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.666001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.666010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.666177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.666331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.666337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.065 [2024-10-11 12:02:44.666342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.065 [2024-10-11 12:02:44.668788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.065 [2024-10-11 12:02:44.678141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.065 [2024-10-11 12:02:44.678729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.065 [2024-10-11 12:02:44.678759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.065 [2024-10-11 12:02:44.678768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.065 [2024-10-11 12:02:44.678938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.065 [2024-10-11 12:02:44.679092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.065 [2024-10-11 12:02:44.679098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.066 [2024-10-11 12:02:44.679103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.066 [2024-10-11 12:02:44.681545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.066 [2024-10-11 12:02:44.690868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.066 [2024-10-11 12:02:44.691374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.066 [2024-10-11 12:02:44.691388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.066 [2024-10-11 12:02:44.691394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.066 [2024-10-11 12:02:44.691549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.066 [2024-10-11 12:02:44.691704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.066 [2024-10-11 12:02:44.691710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.066 [2024-10-11 12:02:44.691715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.066 [2024-10-11 12:02:44.694148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.328 [2024-10-11 12:02:44.703611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.328 [2024-10-11 12:02:44.704116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.328 [2024-10-11 12:02:44.704129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.328 [2024-10-11 12:02:44.704135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.328 [2024-10-11 12:02:44.704286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.328 [2024-10-11 12:02:44.704437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.328 [2024-10-11 12:02:44.704443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.328 [2024-10-11 12:02:44.704448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.328 [2024-10-11 12:02:44.706882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.328 [2024-10-11 12:02:44.716355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.328 [2024-10-11 12:02:44.716945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.328 [2024-10-11 12:02:44.716976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.328 [2024-10-11 12:02:44.716985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.328 [2024-10-11 12:02:44.717152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.328 [2024-10-11 12:02:44.717306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.328 [2024-10-11 12:02:44.717312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.328 [2024-10-11 12:02:44.717317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.328 [2024-10-11 12:02:44.719760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.328 [2024-10-11 12:02:44.729080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.328 [2024-10-11 12:02:44.729423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.328 [2024-10-11 12:02:44.729438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.328 [2024-10-11 12:02:44.729444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.328 [2024-10-11 12:02:44.729595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.328 [2024-10-11 12:02:44.729751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.328 [2024-10-11 12:02:44.729757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.328 [2024-10-11 12:02:44.729762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.328 [2024-10-11 12:02:44.732206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.328 [2024-10-11 12:02:44.741822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.328 [2024-10-11 12:02:44.742302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.328 [2024-10-11 12:02:44.742333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.328 [2024-10-11 12:02:44.742342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.742509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.742663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.742675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.742680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.745120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.329 [2024-10-11 12:02:44.754447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.754918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.754933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.754939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.755091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.755241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.755247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.755252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.757687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.329 [2024-10-11 12:02:44.767150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.767646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.767659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.767664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.767819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.767970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.767976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.767981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.770410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.329 [2024-10-11 12:02:44.779867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.780465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.780499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.780509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.780682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.780836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.780843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.780848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.783282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.329 [2024-10-11 12:02:44.792603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.793228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.793259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.793268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.793435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.793590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.793597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.793603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.796045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.329 [2024-10-11 12:02:44.805228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.805733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.805748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.805754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.805906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.806057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.806063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.806068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.808499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.329 [2024-10-11 12:02:44.817836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.818337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.818349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.818355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.818506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.818660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.818671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.818676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.821109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.329 [2024-10-11 12:02:44.830563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.831048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.831060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.831066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.831216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.831367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.831373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.831378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.833810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.329 [2024-10-11 12:02:44.843323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.843903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.843933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.843942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.844109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.844264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.844270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.844275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.846717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.329 [2024-10-11 12:02:44.856033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.856535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.856550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.856556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.856711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.856863] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.856868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.856873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.859303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.329 [2024-10-11 12:02:44.868775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.329 [2024-10-11 12:02:44.869097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.329 [2024-10-11 12:02:44.869110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.329 [2024-10-11 12:02:44.869115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.329 [2024-10-11 12:02:44.869266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.329 [2024-10-11 12:02:44.869416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.329 [2024-10-11 12:02:44.869422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.329 [2024-10-11 12:02:44.869427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.329 [2024-10-11 12:02:44.871862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.329 [2024-10-11 12:02:44.881465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.330 [2024-10-11 12:02:44.882006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.330 [2024-10-11 12:02:44.882037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.330 [2024-10-11 12:02:44.882046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.330 [2024-10-11 12:02:44.882212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.330 [2024-10-11 12:02:44.882366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.330 [2024-10-11 12:02:44.882373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.330 [2024-10-11 12:02:44.882379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.330 [2024-10-11 12:02:44.884823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.330 [2024-10-11 12:02:44.894147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.330 [2024-10-11 12:02:44.894642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.330 [2024-10-11 12:02:44.894679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.330 [2024-10-11 12:02:44.894688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.330 [2024-10-11 12:02:44.894855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.330 [2024-10-11 12:02:44.895008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.330 [2024-10-11 12:02:44.895015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.330 [2024-10-11 12:02:44.895020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.330 [2024-10-11 12:02:44.897458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.330 [2024-10-11 12:02:44.906785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.330 [2024-10-11 12:02:44.907281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.330 [2024-10-11 12:02:44.907296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.330 [2024-10-11 12:02:44.907305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.330 [2024-10-11 12:02:44.907457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.330 [2024-10-11 12:02:44.907608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.330 [2024-10-11 12:02:44.907613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.330 [2024-10-11 12:02:44.907618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.330 [2024-10-11 12:02:44.910067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.330 [2024-10-11 12:02:44.919396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.330 [2024-10-11 12:02:44.919779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.330 [2024-10-11 12:02:44.919810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.330 [2024-10-11 12:02:44.919819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.330 [2024-10-11 12:02:44.919986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.330 [2024-10-11 12:02:44.920141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.330 [2024-10-11 12:02:44.920148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.330 [2024-10-11 12:02:44.920153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.330 [2024-10-11 12:02:44.922595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.330 [2024-10-11 12:02:44.932070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.330 [2024-10-11 12:02:44.932651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.330 [2024-10-11 12:02:44.932688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.330 [2024-10-11 12:02:44.932697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.330 [2024-10-11 12:02:44.932864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.330 [2024-10-11 12:02:44.933018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.330 [2024-10-11 12:02:44.933025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.330 [2024-10-11 12:02:44.933030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.330 [2024-10-11 12:02:44.935468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.330 [2024-10-11 12:02:44.944810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.330 [2024-10-11 12:02:44.945393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.330 [2024-10-11 12:02:44.945423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.330 [2024-10-11 12:02:44.945433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.330 [2024-10-11 12:02:44.945599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.330 [2024-10-11 12:02:44.945759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.330 [2024-10-11 12:02:44.945765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.330 [2024-10-11 12:02:44.945775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.330 [2024-10-11 12:02:44.948211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.330 [2024-10-11 12:02:44.957538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.330 [2024-10-11 12:02:44.957996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.330 [2024-10-11 12:02:44.958010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.330 [2024-10-11 12:02:44.958016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.330 [2024-10-11 12:02:44.958167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 [2024-10-11 12:02:44.958318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:44.958325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:44.958331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 [2024-10-11 12:02:44.960768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.592 [2024-10-11 12:02:44.970233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.592 [2024-10-11 12:02:44.970710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.592 [2024-10-11 12:02:44.970740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.592 [2024-10-11 12:02:44.970750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.592 [2024-10-11 12:02:44.970920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 [2024-10-11 12:02:44.971074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:44.971081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:44.971086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 [2024-10-11 12:02:44.973530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.592 [2024-10-11 12:02:44.982863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.592 [2024-10-11 12:02:44.983442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.592 [2024-10-11 12:02:44.983473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.592 [2024-10-11 12:02:44.983482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.592 [2024-10-11 12:02:44.983650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 [2024-10-11 12:02:44.983811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:44.983818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:44.983824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 [2024-10-11 12:02:44.986261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.592 [2024-10-11 12:02:44.995583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.592 [2024-10-11 12:02:44.996043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.592 [2024-10-11 12:02:44.996058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.592 [2024-10-11 12:02:44.996064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.592 [2024-10-11 12:02:44.996215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 [2024-10-11 12:02:44.996366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:44.996373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:44.996378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 [2024-10-11 12:02:44.998816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.592 [2024-10-11 12:02:45.008284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.592 [2024-10-11 12:02:45.008636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.592 [2024-10-11 12:02:45.008649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.592 [2024-10-11 12:02:45.008654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.592 [2024-10-11 12:02:45.008808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 [2024-10-11 12:02:45.008960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:45.008967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:45.008973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 [2024-10-11 12:02:45.011410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.592 [2024-10-11 12:02:45.021022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.592 [2024-10-11 12:02:45.021414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.592 [2024-10-11 12:02:45.021426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.592 [2024-10-11 12:02:45.021432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.592 [2024-10-11 12:02:45.021582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 [2024-10-11 12:02:45.021737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:45.021743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:45.021748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 [2024-10-11 12:02:45.024179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.592 [2024-10-11 12:02:45.033634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.592 [2024-10-11 12:02:45.034084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.592 [2024-10-11 12:02:45.034096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.592 [2024-10-11 12:02:45.034101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.592 [2024-10-11 12:02:45.034256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 [2024-10-11 12:02:45.034407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:45.034412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:45.034417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 [2024-10-11 12:02:45.036866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
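The attempts above land roughly every 12-13 ms, the pace at which bdev_nvme re-arms the controller reset after each failure; how long it keeps retrying is governed by the reconnect policy supplied when the controller is attached. As an illustrative sketch only (the bdev name, RPC socket path, and timeout values here are assumptions, not read from this run):

    # Hypothetical host-side attach with an explicit retry policy:
    #   --ctrlr-loss-timeout-sec -1  => keep retrying indefinitely
    #   --reconnect-delay-sec 1      => wait 1 s between reconnect attempts
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec -1 --reconnect-delay-sec 1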
00:29:00.592 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:00.592 [2024-10-11 12:02:45.046258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.592 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:00.592 [2024-10-11 12:02:45.046719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.592 [2024-10-11 12:02:45.046733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.592 [2024-10-11 12:02:45.046740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.592 [2024-10-11 12:02:45.046892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:00.592 [2024-10-11 12:02:45.047043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:45.047050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:45.047055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:00.592 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.592 [2024-10-11 12:02:45.049485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.592 [2024-10-11 12:02:45.058951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.592 [2024-10-11 12:02:45.059488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.592 [2024-10-11 12:02:45.059519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.592 [2024-10-11 12:02:45.059528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.592 [2024-10-11 12:02:45.059701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.592 [2024-10-11 12:02:45.059857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.592 [2024-10-11 12:02:45.059863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.592 [2024-10-11 12:02:45.059869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.592 [2024-10-11 12:02:45.062308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.592 [2024-10-11 12:02:45.071646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.593 [2024-10-11 12:02:45.072123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.593 [2024-10-11 12:02:45.072139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.593 [2024-10-11 12:02:45.072146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.593 [2024-10-11 12:02:45.072302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.593 [2024-10-11 12:02:45.072454] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.593 [2024-10-11 12:02:45.072460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.593 [2024-10-11 12:02:45.072465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.593 [2024-10-11 12:02:45.074903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.593 [2024-10-11 12:02:45.084365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.593 [2024-10-11 12:02:45.084753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.593 [2024-10-11 12:02:45.084766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.593 [2024-10-11 12:02:45.084772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.593 [2024-10-11 12:02:45.084922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.593 [2024-10-11 12:02:45.085073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.593 [2024-10-11 12:02:45.085079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.593 [2024-10-11 12:02:45.085085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.593 [2024-10-11 12:02:45.087516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.593 [2024-10-11 12:02:45.092799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.593 [2024-10-11 12:02:45.097005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.593 [2024-10-11 12:02:45.097556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.593 [2024-10-11 12:02:45.097587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.593 [2024-10-11 12:02:45.097595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.593 [2024-10-11 12:02:45.097771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.593 [2024-10-11 12:02:45.097925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.593 [2024-10-11 12:02:45.097932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.593 [2024-10-11 12:02:45.097938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.593 [2024-10-11 12:02:45.100374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
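Interleaved with the reconnect noise, the target side is now being configured: the trace above shows nvmf_create_transport -t tcp -o -u 8192 and bdev_malloc_create 64 512 -b Malloc0, and the subsystem, namespace, and listener RPCs follow below. Pulled out of the test's rpc_cmd wrapper, the bring-up amounts to this sequence (default RPC socket assumed):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # flags exactly as issued by the test
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the last call opens the listener on 10.0.0.2:4420, the pending reset finally succeeds, as the "Resetting controller successful" notice below confirms.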
00:29:00.593 [2024-10-11 12:02:45.109703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.593 [2024-10-11 12:02:45.110173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.593 [2024-10-11 12:02:45.110203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.593 [2024-10-11 12:02:45.110213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.593 [2024-10-11 12:02:45.110380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.593 [2024-10-11 12:02:45.110534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.593 [2024-10-11 12:02:45.110541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.593 [2024-10-11 12:02:45.110546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.593 [2024-10-11 12:02:45.112994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.593 [2024-10-11 12:02:45.122322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.593 [2024-10-11 12:02:45.122829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.593 [2024-10-11 12:02:45.122845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.593 [2024-10-11 12:02:45.122851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.593 [2024-10-11 12:02:45.123002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.593 [2024-10-11 12:02:45.123153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.593 [2024-10-11 12:02:45.123159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.593 [2024-10-11 12:02:45.123164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.593 [2024-10-11 12:02:45.125594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:00.593 Malloc0 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.593 [2024-10-11 12:02:45.135054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.593 [2024-10-11 12:02:45.135505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.593 [2024-10-11 12:02:45.135516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.593 [2024-10-11 12:02:45.135522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.593 [2024-10-11 12:02:45.135685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.593 [2024-10-11 12:02:45.135836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.593 [2024-10-11 12:02:45.135842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.593 [2024-10-11 12:02:45.135847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:00.593 [2024-10-11 12:02:45.138278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.593 [2024-10-11 12:02:45.147739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:00.593 [2024-10-11 12:02:45.148236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.593 [2024-10-11 12:02:45.148248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb0f500 with addr=10.0.0.2, port=4420 00:29:00.593 [2024-10-11 12:02:45.148253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0f500 is same with the state(6) to be set 00:29:00.593 [2024-10-11 12:02:45.148404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0f500 (9): Bad file descriptor 00:29:00.593 [2024-10-11 12:02:45.148554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:00.593 [2024-10-11 12:02:45.148560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:00.593 [2024-10-11 12:02:45.148566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:00.593 [2024-10-11 12:02:45.151001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:00.593 [2024-10-11 12:02:45.160270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:00.593 [2024-10-11 12:02:45.160451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.593 12:02:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1195773
00:29:00.593 [2024-10-11 12:02:45.195027] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:02.104 4813.14 IOPS, 18.80 MiB/s
[2024-10-11T10:02:47.681Z] 5835.50 IOPS, 22.79 MiB/s
[2024-10-11T10:02:48.625Z] 6646.33 IOPS, 25.96 MiB/s
[2024-10-11T10:02:49.564Z] 7297.70 IOPS, 28.51 MiB/s
[2024-10-11T10:02:50.948Z] 7824.55 IOPS, 30.56 MiB/s
[2024-10-11T10:02:51.888Z] 8257.25 IOPS, 32.25 MiB/s
[2024-10-11T10:02:52.830Z] 8643.62 IOPS, 33.76 MiB/s
[2024-10-11T10:02:53.771Z] 8967.29 IOPS, 35.03 MiB/s
00:29:09.139 Latency(us)
00:29:09.139 [2024-10-11T10:02:53.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.139 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:09.139 Verification LBA range: start 0x0 length 0x4000
00:29:09.139 Nvme1n1 : 15.01 9249.58 36.13 13215.49 0.00 5678.72 370.35 16493.23
00:29:09.139 [2024-10-11T10:02:53.771Z] ===================================================================================================================
00:29:09.139 [2024-10-11T10:02:53.771Z] Total : 9249.58 36.13 13215.49 0.00 5678.72 370.35 16493.23
00:29:09.139 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:09.140 rmmod nvme_tcp
00:29:09.140 rmmod nvme_fabrics
00:29:09.140 rmmod nvme_keyring
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1196789 ']'
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1196789
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1196789 ']'
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1196789
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:09.140 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1196789
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1196789'
00:29:09.401 killing process with pid 1196789
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1196789
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1196789
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:09.401 12:02:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:11.946 12:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:11.946
00:29:11.946 real 0m28.230s
00:29:11.946 user 1m2.763s
00:29:11.946 sys 0m7.955s
00:29:11.946 12:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:11.946 12:02:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:11.946 ************************************
00:29:11.946 END TEST nvmf_bdevperf 00:29:11.946 ************************************ 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.946 ************************************ 00:29:11.946 START TEST nvmf_target_disconnect 00:29:11.946 ************************************ 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:11.946 * Looking for test storage... 00:29:11.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:11.946 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:11.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.947 --rc genhtml_branch_coverage=1 00:29:11.947 --rc genhtml_function_coverage=1 00:29:11.947 --rc genhtml_legend=1 00:29:11.947 --rc geninfo_all_blocks=1 00:29:11.947 --rc geninfo_unexecuted_blocks=1 00:29:11.947 00:29:11.947 ' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:11.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.947 --rc genhtml_branch_coverage=1 00:29:11.947 --rc genhtml_function_coverage=1 00:29:11.947 --rc genhtml_legend=1 00:29:11.947 --rc geninfo_all_blocks=1 00:29:11.947 --rc geninfo_unexecuted_blocks=1 00:29:11.947 00:29:11.947 ' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:11.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.947 --rc genhtml_branch_coverage=1 00:29:11.947 --rc genhtml_function_coverage=1 00:29:11.947 --rc genhtml_legend=1 00:29:11.947 --rc geninfo_all_blocks=1 00:29:11.947 --rc geninfo_unexecuted_blocks=1 00:29:11.947 00:29:11.947 ' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:11.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.947 --rc genhtml_branch_coverage=1 00:29:11.947 --rc genhtml_function_coverage=1 00:29:11.947 --rc genhtml_legend=1 00:29:11.947 --rc geninfo_all_blocks=1 00:29:11.947 --rc geninfo_unexecuted_blocks=1 00:29:11.947 00:29:11.947 ' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.947 12:02:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:20.088 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.088 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:20.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:20.089 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:20.089 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:20.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:20.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms
00:29:20.089
00:29:20.089 --- 10.0.0.2 ping statistics ---
00:29:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:20.089 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms
00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:20.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:20.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:29:20.089 00:29:20.089 --- 10.0.0.1 ping statistics --- 00:29:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.089 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:20.089 ************************************ 00:29:20.089 START TEST nvmf_target_disconnect_tc1 00:29:20.089 ************************************ 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:20.089 12:03:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:20.089 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.089 [2024-10-11 12:03:03.951204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.089 [2024-10-11 12:03:03.951293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeee670 with addr=10.0.0.2, port=4420 00:29:20.090 [2024-10-11 12:03:03.951328] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:20.090 [2024-10-11 12:03:03.951340] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:20.090 [2024-10-11 12:03:03.951348] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:20.090 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:20.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:20.090 Initializing NVMe Controllers 00:29:20.090 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:20.090 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:20.090 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:20.090 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:20.090 00:29:20.090 real 0m0.134s 00:29:20.090 user 0m0.051s 00:29:20.090 sys 0m0.081s 00:29:20.090 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.090 12:03:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:20.090 ************************************ 00:29:20.090 END TEST nvmf_target_disconnect_tc1 00:29:20.090 ************************************ 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:20.090 ************************************ 00:29:20.090 START TEST nvmf_target_disconnect_tc2 00:29:20.090 ************************************ 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1202945 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1202945 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1202945 ']' 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.090 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.090 [2024-10-11 12:03:04.114651] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:29:20.090 [2024-10-11 12:03:04.114723] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.090 [2024-10-11 12:03:04.205445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.090 [2024-10-11 12:03:04.258485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.090 [2024-10-11 12:03:04.258540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:20.090 [2024-10-11 12:03:04.258548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.090 [2024-10-11 12:03:04.258555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.090 [2024-10-11 12:03:04.258562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.090 [2024-10-11 12:03:04.260647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:20.090 [2024-10-11 12:03:04.260812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:20.090 [2024-10-11 12:03:04.261293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:20.090 [2024-10-11 12:03:04.261297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:20.351 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.351 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:20.351 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:20.351 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.351 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.612 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.612 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:20.612 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.612 12:03:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.612 Malloc0 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.612 [2024-10-11 12:03:05.025254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.612 12:03:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.612 [2024-10-11 12:03:05.065626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.612 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.613 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:20.613 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.613 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.613 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.613 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1203290 00:29:20.613 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:20.613 12:03:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:22.528 12:03:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1202945 00:29:22.528 12:03:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error 
(sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Write completed with error (sct=0, sc=8) 00:29:22.528 starting I/O failed 00:29:22.528 Read completed with error (sct=0, sc=8) 00:29:22.529 starting I/O failed 00:29:22.529 Read completed with error (sct=0, sc=8) 00:29:22.529 starting I/O failed 00:29:22.529 Read completed with error (sct=0, sc=8) 00:29:22.529 starting I/O failed 00:29:22.529 Read completed with error (sct=0, sc=8) 00:29:22.529 starting I/O failed 00:29:22.529 Read completed with error (sct=0, sc=8) 00:29:22.529 starting I/O failed 00:29:22.529 Write completed with error (sct=0, sc=8) 00:29:22.529 starting I/O failed 00:29:22.529 Write completed with error (sct=0, sc=8) 00:29:22.529 starting I/O failed 00:29:22.529 Read completed with error (sct=0, sc=8) 00:29:22.529 starting I/O failed 00:29:22.529 [2024-10-11 12:03:07.103587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.529 [2024-10-11 12:03:07.104204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.104270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.104642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.104651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.105192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.105248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 
00:29:22.529 [2024-10-11 12:03:07.105585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.105596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.105695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.105708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.106182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.106238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.106573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.106583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.107085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.107142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.107495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.107506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.107945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.108003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.108353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.108370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.108567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.108577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.108897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.529 [2024-10-11 12:03:07.108906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.529 qpair failed and we were unable to recover it.
00:29:22.529 [2024-10-11 12:03:07.109285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.109294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.109519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.109529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.109933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.109943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.110293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.110302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.110598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.110606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.110933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.110942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.111290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.111298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.111643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.111652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.112007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.112016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.112320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.112329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 
00:29:22.529 [2024-10-11 12:03:07.112497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.112507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.112882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.112891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.113096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.113105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.113203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.113210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.113546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.113554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.113914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.113923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.114156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.114164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.114463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.114471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.114699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.114709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.115087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.115095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 
00:29:22.529 [2024-10-11 12:03:07.115386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.115395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.115742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.115751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.116025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.116033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.116350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.529 [2024-10-11 12:03:07.116359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.529 qpair failed and we were unable to recover it. 00:29:22.529 [2024-10-11 12:03:07.116542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.116553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.116866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.116875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.117166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.117175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.117472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.117480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.117771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.117779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.117974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.117984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 
00:29:22.530 [2024-10-11 12:03:07.118163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.118172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.118475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.118484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.118915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.118924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.119210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.119219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.119402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.119412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.119616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.119624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.119938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.119945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.120331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.120338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.120678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.120686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.120993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.121000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 
00:29:22.530 [2024-10-11 12:03:07.121341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.121348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.121519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.121527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.121879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.121886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.122189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.122196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.122559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.122567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.122857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.122865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.123180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.123187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.123356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.123364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.123688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.123697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.123911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.123919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 
00:29:22.530 [2024-10-11 12:03:07.124255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.124262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.124547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.124557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.124914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.124922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.125228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.125235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.125522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.125529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.125727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.125735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.126103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.126110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.126418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.126425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.126769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.126777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.127079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.127087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 
00:29:22.530 [2024-10-11 12:03:07.127389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.127396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.127707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.127715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.127994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.128001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.128318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.530 [2024-10-11 12:03:07.128325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.530 qpair failed and we were unable to recover it. 00:29:22.530 [2024-10-11 12:03:07.128678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.128686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.129041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.129050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.129350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.129358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.129666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.129684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.129975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.129984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.130375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.130382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 
00:29:22.531 [2024-10-11 12:03:07.130591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.130598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.130909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.130916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.131154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.131163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.131489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.131497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.131739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.131747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.132107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.132115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.132428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.132436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.132759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.132766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.133070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.133078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.133378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.133385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 
00:29:22.531 [2024-10-11 12:03:07.133692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.133700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.133971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.133978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.134341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.134348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.134661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.134684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.135013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.135022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.135320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.135328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.135657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.135665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.135896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.135903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.136135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.136143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.136474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.136481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 
00:29:22.531 [2024-10-11 12:03:07.136787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.136795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.137092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.137099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.137425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.137432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.137743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.137752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.138103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.138111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.138407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.138415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.138721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.138729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.139066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.139075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.139432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.139441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.531 [2024-10-11 12:03:07.139648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.139656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 
00:29:22.531 [2024-10-11 12:03:07.140006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.531 [2024-10-11 12:03:07.140015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.531 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.140356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.140365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.140585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.140592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.140902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.140910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.141210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.141217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.141543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.141551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.141783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.141792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.142143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.142150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.142459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.142466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.142777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.142784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 
00:29:22.532 [2024-10-11 12:03:07.143092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.143100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.143306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.143314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.143623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.143631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.143927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.143935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.144326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.144335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.144660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.144675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.144990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.144998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.145298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.145306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.145519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.145533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.145845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.145856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 
00:29:22.532 [2024-10-11 12:03:07.146195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.146203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.146551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.146558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.146932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.146940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.147277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.147285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.147641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.147650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.147830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.147839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.148142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.148149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.148342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.148349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.148699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.148706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.149098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.149106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 
00:29:22.532 [2024-10-11 12:03:07.149410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.149417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.149760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.149768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.150089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.150096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.150298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.150306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.150632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.150640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.150991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.151000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.151296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.151303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.151607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.151614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.151928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.151936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.152255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.152263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 
00:29:22.532 [2024-10-11 12:03:07.152563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.532 [2024-10-11 12:03:07.152571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.532 qpair failed and we were unable to recover it. 00:29:22.532 [2024-10-11 12:03:07.152913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.152921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.153238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.153246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.153555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.153563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.153836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.153845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.154176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.154184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.154553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.154564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.154887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.154896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.155201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.155209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.155403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.155412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 
00:29:22.533 [2024-10-11 12:03:07.155721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.155729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.156068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.156076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.156387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.156394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.156714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.156721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.157011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.157019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.533 [2024-10-11 12:03:07.157224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.533 [2024-10-11 12:03:07.157232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.533 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.157464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.157475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.157720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.157728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.158040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.158050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.158374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.158382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 
00:29:22.808 [2024-10-11 12:03:07.158716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.158724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.158939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.158948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.159252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.159259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.159573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.159580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.159861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.159868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.160178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.160185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.160514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.160521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.160827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.160835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.161139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.161146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.161449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.161457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 
00:29:22.808 [2024-10-11 12:03:07.161781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.161788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.161995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.162002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.162408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.162415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.162727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.162735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.163034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.163041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.163356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.163364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.163676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.163683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.163888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.163896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.164076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.164084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.164389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.164397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 
00:29:22.808 [2024-10-11 12:03:07.164772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.164780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.165135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.165143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.165470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.165478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.165788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.165796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.808 qpair failed and we were unable to recover it. 00:29:22.808 [2024-10-11 12:03:07.166131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.808 [2024-10-11 12:03:07.166139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.166336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.166343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.166640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.166649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.166940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.166948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.167262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.167269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.167600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.167609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 
00:29:22.809 [2024-10-11 12:03:07.167931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.167938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.168258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.168266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.168626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.168633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.168960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.168968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.169298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.169305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.169628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.169636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.169852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.169859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.170243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.170251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.170632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.170639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.170856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.170864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 
00:29:22.809 [2024-10-11 12:03:07.171181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.171188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.171523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.171531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.171869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.171876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.172199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.172207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.172409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.172417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.172694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.172703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.173044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.173051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.173373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.173381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.173705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.173713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.173973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.173981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 
00:29:22.809 [2024-10-11 12:03:07.174306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.174313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.174635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.174642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.174976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.174983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.175167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.175175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.175465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.175476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.175803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.175811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.176140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.176147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.176337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.176344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.176679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.176687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.177068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.177075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 
00:29:22.809 [2024-10-11 12:03:07.177380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.177388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.177718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.177726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.178040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.178048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.178367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.809 [2024-10-11 12:03:07.178374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.809 qpair failed and we were unable to recover it. 00:29:22.809 [2024-10-11 12:03:07.178705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.178713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.179033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.179040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.179363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.179371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.179695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.179703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.179991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.179999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.180325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.180332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 
00:29:22.810 [2024-10-11 12:03:07.180657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.180665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.181003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.181011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.181283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.181292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.181614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.181623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.181834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.181842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.182163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.182170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.182571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.182580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.182901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.182910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.183231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.183239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.183564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.183572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 
00:29:22.810 [2024-10-11 12:03:07.183862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.183871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.184191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.184202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.184523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.184531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.184840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.184848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.185013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.185021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.185361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.185369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.185690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.185697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.186002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.186009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.186335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.186342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.186676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.186684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 
00:29:22.810 [2024-10-11 12:03:07.187003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.187010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.187339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.187346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.187666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.187677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.188077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.188085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.188452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.188459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.188649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.188657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.189027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.189034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.189366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.189374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.189548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.189557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.189853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.189861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 
00:29:22.810 [2024-10-11 12:03:07.190188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.190196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.190521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.190528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.190841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.190849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.191171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.810 [2024-10-11 12:03:07.191178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.810 qpair failed and we were unable to recover it. 00:29:22.810 [2024-10-11 12:03:07.191524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.191532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.191840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.191847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.192178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.192186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.192511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.192518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.192766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.192776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.193109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.193116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 
00:29:22.811 [2024-10-11 12:03:07.193429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.193437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.193738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.193745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.194080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.194088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.194448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.194455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.194771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.194779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.195104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.195111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.195515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.195523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.195850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.195858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.196187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.196194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.196408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.196416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 
00:29:22.811 [2024-10-11 12:03:07.196767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.196775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.196966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.196975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.197317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.197324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.197623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.197631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.197981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.197988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.198309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.198317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.198639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.198647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.198975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.198982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.199177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.199185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.199554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.199563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 
00:29:22.811 [2024-10-11 12:03:07.199896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.199905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.200239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.200247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.200529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.200538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.200930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.200940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.201258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.201266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.201590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.201598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.201914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.201923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.202250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.202259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.202581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.202589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.202897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.202906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 
00:29:22.811 [2024-10-11 12:03:07.203225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.203234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.203544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.203553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.203841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.203849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.204172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.204181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.811 qpair failed and we were unable to recover it. 00:29:22.811 [2024-10-11 12:03:07.204501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.811 [2024-10-11 12:03:07.204509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.204836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.204845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.205199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.205207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.205515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.205524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.205717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.205727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.206060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.206067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 
00:29:22.812 [2024-10-11 12:03:07.206384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.206392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.206580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.206587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.206904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.206912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.207107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.207115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.207295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.207303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.207614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.207621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.207908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.207916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.208204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.208211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.208534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.208541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.208888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.208895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 
00:29:22.812 [2024-10-11 12:03:07.209220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.209228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.209538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.209545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.209771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.209778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.210096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.210103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.210427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.210435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.210632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.210639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.210928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.210936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.211260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.211268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.211586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.211594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 00:29:22.812 [2024-10-11 12:03:07.211912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.812 [2024-10-11 12:03:07.211920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.812 qpair failed and we were unable to recover it. 
00:29:22.817 [2024-10-11 12:03:07.270898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.270907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.271141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.271149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.271481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.271489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.271817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.271825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.272126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.272134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.272347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.272354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.272580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.272587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.272974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.272981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.273394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.273402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.273633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.273640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 
00:29:22.817 [2024-10-11 12:03:07.273987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.273995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.274317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.274324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.274646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.274654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.817 qpair failed and we were unable to recover it. 00:29:22.817 [2024-10-11 12:03:07.275054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.817 [2024-10-11 12:03:07.275061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.275361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.275368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.275689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.275696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.276021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.276028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.276352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.276359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.276679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.276687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.276892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.276899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 
00:29:22.818 [2024-10-11 12:03:07.277188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.277195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.277526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.277533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.277845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.277854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.278188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.278195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.278489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.278496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.278832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.278840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.279210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.279217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.279519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.279527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.279842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.279849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.280178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.280186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 
00:29:22.818 [2024-10-11 12:03:07.280522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.280529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.280845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.280853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.281181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.281188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.281511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.281521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.281861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.281870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.282195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.282203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.282527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.282536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.282848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.282857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.283155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.283163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.283495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.283503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 
00:29:22.818 [2024-10-11 12:03:07.283861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.283870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.284055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.284064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.284406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.284413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.284738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.284745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.285035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.285042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.285368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.285376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.285701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.285709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.286029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.286036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.286375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.286382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.286556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.286564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 
00:29:22.818 [2024-10-11 12:03:07.286957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.286965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.287298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.287306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.287636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.818 [2024-10-11 12:03:07.287643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.818 qpair failed and we were unable to recover it. 00:29:22.818 [2024-10-11 12:03:07.288048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.288057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.288418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.288425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.288730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.288738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.289070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.289077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.289407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.289415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.289741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.289748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.289935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.289943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 
00:29:22.819 [2024-10-11 12:03:07.290235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.290245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.290520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.290527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.290933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.290942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.291259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.291266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.291649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.291656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.291983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.291990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.292308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.292315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.292616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.292624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.292956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.292963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.293295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.293303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 
00:29:22.819 [2024-10-11 12:03:07.293620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.293628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.293926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.293933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.294235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.294242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.294447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.294456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.294736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.294744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.295092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.295099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.295415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.295422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.295742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.295749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.295965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.295972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.296255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.296262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 
00:29:22.819 [2024-10-11 12:03:07.296635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.296644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.296964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.296971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.297279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.297287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.297455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.297464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.297775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.297782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.298157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.298165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.298410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.298417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.298745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.298753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.299118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.299126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.299329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.299337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 
00:29:22.819 [2024-10-11 12:03:07.299665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.299677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.299996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.300003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.300329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.300336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.819 [2024-10-11 12:03:07.300661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.819 [2024-10-11 12:03:07.300672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.819 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.300841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.300848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.301077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.301086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.301461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.301469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.301790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.301798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.302132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.302140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.302509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.302516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 
00:29:22.820 [2024-10-11 12:03:07.302812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.302820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.303143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.303151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.303537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.303545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.303883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.303890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.304198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.304206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.304534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.304541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.304897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.304904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.305227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.305234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.305553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.305560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.305851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.305859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 
00:29:22.820 [2024-10-11 12:03:07.306161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.306169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.306490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.306498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.306826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.306834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.307151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.307157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.307339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.307346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.307583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.307590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.307888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.307895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.308251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.308258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.308595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.308602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.309007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.309015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 
00:29:22.820 [2024-10-11 12:03:07.309332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.309340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.309676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.309684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.309998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.310005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.310089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.310096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.310441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.310449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.310735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.310743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.311080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.311087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.311387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.311394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.311709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.311719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.312050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.312057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 
00:29:22.820 [2024-10-11 12:03:07.312273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.312280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.312635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.312642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.312876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.312883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.313059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.820 [2024-10-11 12:03:07.313067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.820 qpair failed and we were unable to recover it. 00:29:22.820 [2024-10-11 12:03:07.313391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.821 [2024-10-11 12:03:07.313398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.821 qpair failed and we were unable to recover it. 00:29:22.821 [2024-10-11 12:03:07.313613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.821 [2024-10-11 12:03:07.313620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.821 qpair failed and we were unable to recover it. 00:29:22.821 [2024-10-11 12:03:07.313960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.821 [2024-10-11 12:03:07.313969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.821 qpair failed and we were unable to recover it. 00:29:22.821 [2024-10-11 12:03:07.314296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.821 [2024-10-11 12:03:07.314304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.821 qpair failed and we were unable to recover it. 00:29:22.821 [2024-10-11 12:03:07.314627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.821 [2024-10-11 12:03:07.314636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.821 qpair failed and we were unable to recover it. 00:29:22.821 [2024-10-11 12:03:07.314932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.821 [2024-10-11 12:03:07.314941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.821 qpair failed and we were unable to recover it. 
00:29:22.821 [2024-10-11 12:03:07.315264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.821 [2024-10-11 12:03:07.315271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.821 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats continuously from 12:03:07.315 through 12:03:07.381 ...]
00:29:22.826 [2024-10-11 12:03:07.381745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.826 [2024-10-11 12:03:07.381753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.826 qpair failed and we were unable to recover it.
00:29:22.826 [2024-10-11 12:03:07.382079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.382086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.382272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.382279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.382653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.382661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.382872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.382880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.383219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.383227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.383527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.383534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.383882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.383890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.384066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.384074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.384432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.384440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.384757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.384766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 
00:29:22.826 [2024-10-11 12:03:07.385083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.385091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.385324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.385333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.385656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.385664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.385976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.385984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.826 [2024-10-11 12:03:07.386303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.826 [2024-10-11 12:03:07.386312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.826 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.386514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.386522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.386801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.386809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.387138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.387146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.387548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.387555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.387854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.387862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 
00:29:22.827 [2024-10-11 12:03:07.388200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.388207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.388539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.388546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.388724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.388735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.389053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.389060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.389467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.389474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.389801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.389809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.390139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.390147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.390506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.390514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.390854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.390862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.391193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.391201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 
00:29:22.827 [2024-10-11 12:03:07.391520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.391529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.391703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.391712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.391936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.391944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.392250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.392258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.392601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.392608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.392924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.392932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.393256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.393264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.393585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.393593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.393929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.393938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.394256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.394264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 
00:29:22.827 [2024-10-11 12:03:07.394352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.394360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.394641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.394650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.394971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.394980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.395310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.395318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.395681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.395690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.396031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.396039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.396346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.396354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.396685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.396693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.397010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.397018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.397391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.397403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 
00:29:22.827 [2024-10-11 12:03:07.397721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.397729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.398039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.398047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.398237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.398246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.398628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.398635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.398936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.827 [2024-10-11 12:03:07.398944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.827 qpair failed and we were unable to recover it. 00:29:22.827 [2024-10-11 12:03:07.399271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.399279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.399601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.399609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.399869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.399878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.400066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.400075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.400400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.400407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 
00:29:22.828 [2024-10-11 12:03:07.400730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.400738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.400967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.400976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.401246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.401254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.401569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.401578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.401907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.401915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.402216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.402225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.402546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.402553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.402766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.402774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.402984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.402992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.403324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.403332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 
00:29:22.828 [2024-10-11 12:03:07.403526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.403534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.403755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.403763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.404058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.404067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.404420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.404428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.404740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.404749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.405077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.405085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.405396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.405405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.405717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.405726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.406038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.406047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.406369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.406377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 
00:29:22.828 [2024-10-11 12:03:07.406698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.406706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.407057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.407065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.407270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.407278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.407556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.407564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.407755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.407764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.408004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.408011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.408337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.408344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.408561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.408569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.408858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.408866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.409064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.409072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 
00:29:22.828 [2024-10-11 12:03:07.409408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.409417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.409781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.409789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.410105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.410113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.410435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.410442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.828 qpair failed and we were unable to recover it. 00:29:22.828 [2024-10-11 12:03:07.410762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.828 [2024-10-11 12:03:07.410770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.411090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.411097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.411462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.411470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.411761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.411769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.412068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.412076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.412307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.412315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 
00:29:22.829 [2024-10-11 12:03:07.412631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.412639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.412959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.412968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.413292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.413300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.413664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.413678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.414010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.414018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.414339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.414347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.414688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.414697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.415040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.415049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.415366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.415374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.415702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.415711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 
00:29:22.829 [2024-10-11 12:03:07.416050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.416057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.416372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.416379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.416700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.416708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.417032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.417040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.417364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.417372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.417683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.417691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.418065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.418073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.418368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.418379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.418696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.418704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.419012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.419020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 
00:29:22.829 [2024-10-11 12:03:07.419197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.419205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.419466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.419473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.419761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.419769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.420095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.420103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.420420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.420427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.420628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.420635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.420940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.420949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.421274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.421281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.421575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.421589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 00:29:22.829 [2024-10-11 12:03:07.421822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.829 [2024-10-11 12:03:07.421829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:22.829 qpair failed and we were unable to recover it. 
00:29:22.829 [2024-10-11 12:03:07.422158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:22.829 [2024-10-11 12:03:07.422167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:22.829 qpair failed and we were unable to recover it.
[log condensed: the same three-line failure sequence repeats 208 more times, from 12:03:07.422483 through 12:03:07.487143 (console time 00:29:22.829 to 00:29:23.134), always with errno = 111 on tqpair=0x102dbd0, addr=10.0.0.2, port=4420]
00:29:23.134 [2024-10-11 12:03:07.487472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.134 [2024-10-11 12:03:07.487480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.134 qpair failed and we were unable to recover it.
00:29:23.134 [2024-10-11 12:03:07.487812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.487820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.488145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.488154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.488474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.488481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.488802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.488810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.489134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.489142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.489545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.489554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.489845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.489852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.490179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.490187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.490513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.490520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.490840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.490848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-10-11 12:03:07.491176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.491183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.491500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.491507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.491820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.491828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.492146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.492153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.492477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.492484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.492805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.492813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.493140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.493147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.493477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.493484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.493818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.493826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.494139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.494146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-10-11 12:03:07.494474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.494481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.494813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.494824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.495142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.495149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.495471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.495479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.495802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.495810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.496149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.496157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.496481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.496489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.496808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.496817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.496998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.497006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.497212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.497220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-10-11 12:03:07.497420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.497428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.497627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.497635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.497951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.497958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.498300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.498307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.498678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.498686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.498982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.498990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.499319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.499326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.499653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.499660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.500004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.500012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.500333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.500340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-10-11 12:03:07.500515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.500523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.500862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.500870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.501190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.501199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.501519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.501528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.501744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.501752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.502092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.502099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.502388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.502395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.502724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.502732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.503081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.503088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.503410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.503417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-10-11 12:03:07.503644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.503652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.503981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.503988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.504305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.504313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.504650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.504657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.504993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.505001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.505169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.505178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.505535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.505542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.505845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.505853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.506180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.506187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.506507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.506514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-10-11 12:03:07.506835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.506843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.507253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.507261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.507573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.507580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.507890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.507898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.508221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.508229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.508551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.508559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.508894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.508901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.509229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.509237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.509563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.509571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.509896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.509905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 
00:29:23.134 [2024-10-11 12:03:07.510095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.510104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.510472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.510480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.510804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.134 [2024-10-11 12:03:07.510812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.134 qpair failed and we were unable to recover it. 00:29:23.134 [2024-10-11 12:03:07.511126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.511134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.511466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.511474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.511792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.511801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.512122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.512131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.512449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.512457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.512787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.512795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.513150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.513158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 
00:29:23.135 [2024-10-11 12:03:07.513482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.513491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.513680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.513693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.513992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.514000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.514391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.514399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.514503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.514510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.514822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.514830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.515160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.515167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.515499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.515507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.515725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.515733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.516004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.516015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 
00:29:23.135 [2024-10-11 12:03:07.516336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.516343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.516678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.516686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.517004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.517012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.517330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.517337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.517650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.517657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.517878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.517886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.518234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.518242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.518605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.518613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.518843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.518851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.519193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.519200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 
00:29:23.135 [2024-10-11 12:03:07.519517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.519525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.519892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.519899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.520104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.520112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.520467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.520475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.520812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.520820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.521001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.521011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.521441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.521449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.521861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.521868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.522191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.522199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.522524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.522531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 
00:29:23.135 [2024-10-11 12:03:07.522844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.522852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.523182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.523189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.523514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.523521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.523875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.523882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.524092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.524099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.524419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.524427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.524836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.524848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.525127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.525136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.525340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.525348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.525624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.525631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 
00:29:23.135 [2024-10-11 12:03:07.525937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.525945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.526272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.526279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.526483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.526490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.526850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.526858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.527179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.527187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.527479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.527486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.527813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.527821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.528140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.528148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.528464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.528472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 00:29:23.135 [2024-10-11 12:03:07.528793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.135 [2024-10-11 12:03:07.528800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.135 qpair failed and we were unable to recover it. 
00:29:23.135 [2024-10-11 12:03:07.529119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.135 [2024-10-11 12:03:07.529127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.135 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() error with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x102dbd0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously, with only the microsecond timestamps advancing, from 12:03:07.529 through 12:03:07.594 ...]
00:29:23.138 [2024-10-11 12:03:07.594816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.138 [2024-10-11 12:03:07.594824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.138 qpair failed and we were unable to recover it.
00:29:23.138 [2024-10-11 12:03:07.595210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.595217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.595513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.595520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.595846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.595854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.596180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.596188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.596510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.596518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.596818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.596826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.597057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.597064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.597415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.597422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.597770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.597778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.597984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.597991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 
00:29:23.138 [2024-10-11 12:03:07.598316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.598324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.598737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.598745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.599079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.599088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.599411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.599420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.599614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.599623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.599921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.599929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.600147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.600155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.600481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.600490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.600807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.600815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.601239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.601247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 
00:29:23.138 [2024-10-11 12:03:07.601558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.601569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.601886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.601893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.602304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.602312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.602630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.602637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.602966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.602974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.603307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.603314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.603634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.603642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.603969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.603976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.604299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.604307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.604626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.604635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 
00:29:23.138 [2024-10-11 12:03:07.604959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.604968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.605297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.138 [2024-10-11 12:03:07.605306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.138 qpair failed and we were unable to recover it. 00:29:23.138 [2024-10-11 12:03:07.605529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.605538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.605848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.605857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.606221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.606232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.606562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.606571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.606860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.606869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.607200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.607209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.607515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.607524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.607840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.607848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 
00:29:23.139 [2024-10-11 12:03:07.608075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.608083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.608417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.608426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.608738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.608746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.609076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.609083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.609405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.609412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.609643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.609651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.609958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.609966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.610307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.610315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.610647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.610655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.610971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.610980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 
00:29:23.139 [2024-10-11 12:03:07.611278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.611287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.611614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.611623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.611945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.611953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.612123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.612132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.612413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.612428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.612722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.612731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.613054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.613063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.613384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.613391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.613719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.613727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.614051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.614060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 
00:29:23.139 [2024-10-11 12:03:07.614376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.614384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.614716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.614730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.615058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.615066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.615389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.615397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.615726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.615734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.616109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.616118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.616441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.616448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.616685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.616693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.617035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.617043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.617367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.617375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 
00:29:23.139 [2024-10-11 12:03:07.617589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.617597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.617819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.617828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.618142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.618149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.618470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.618478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.618804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.618812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.619201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.619211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.619577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.619585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.619920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.619928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.620259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.620266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.620591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.620599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 
00:29:23.139 [2024-10-11 12:03:07.620892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.620900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.621230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.621239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.621523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.621531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.621864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.621872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.622198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.622206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.622523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.622531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.622848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.622856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.623179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.623188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.623503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.623514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.623840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.623848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 
00:29:23.139 [2024-10-11 12:03:07.624159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.624167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.624463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.624471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.624791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.624799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.625103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.625110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.625433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.625440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.625743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.625756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.625961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.625969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.626293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.626300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.626661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.626676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.627002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.627009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 
00:29:23.139 [2024-10-11 12:03:07.627329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.627336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.627623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.627631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.627947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.627955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.628275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.628283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.628490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.628498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.628875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.628885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.629208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.629216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.629520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.629527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.629745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.629753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.139 qpair failed and we were unable to recover it. 00:29:23.139 [2024-10-11 12:03:07.630122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.139 [2024-10-11 12:03:07.630129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 
00:29:23.140 [2024-10-11 12:03:07.630457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.630466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.630782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.630791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.631119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.631128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.631455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.631463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.631744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.631753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.632086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.632098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.632182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.632192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.632476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.632484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.632819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.632827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.633150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.633158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 
00:29:23.140 [2024-10-11 12:03:07.633486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.633495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.633814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.633822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.634144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.634152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.634481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.634488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.634820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.634828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.635202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.635210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.635514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.635524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.635874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.635884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.636214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.636224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.636549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.636557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 
00:29:23.140 [2024-10-11 12:03:07.636884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.636893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.637110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.637130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.637458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.637465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.637758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.637765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.638095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.638103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.638434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.638442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.638721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.638730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.639061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.639068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.639409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.639417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 00:29:23.140 [2024-10-11 12:03:07.639746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.140 [2024-10-11 12:03:07.639754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.140 qpair failed and we were unable to recover it. 
00:29:23.140 - 00:29:23.143 [2024-10-11 12:03:07.640075 .. 12:03:07.703339] the same three messages repeat for every remaining connection attempt: connect() failed, errno = 111; sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:29:23.143 [2024-10-11 12:03:07.703678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.703686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.704039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.704049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.704371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.704380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.704702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.704709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.705034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.705043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.705362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.705369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.705678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.705687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.706031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.706039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.706341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.706349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.706683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.706691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 
00:29:23.143 [2024-10-11 12:03:07.707018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.707027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.707347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.707356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.707680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.707691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.708003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.708013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.708329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.708341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.708654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.708663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.708988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.708996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.709303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.709312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.709635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.709642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.709982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.709990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 
00:29:23.143 [2024-10-11 12:03:07.710337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.710347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.710662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.710678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.710992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.711001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.711327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.711337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.711651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.711660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.711911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.711920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.712242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.712250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.712553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.712561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.712894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.712902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.713213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.713222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 
00:29:23.143 [2024-10-11 12:03:07.713443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.713452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.713810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.713818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.714178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.714186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.714491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.714499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.714836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.714844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.715047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.715055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.715354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.715363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.715686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.715696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.716027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.716035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.716356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.716364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 
00:29:23.143 [2024-10-11 12:03:07.716685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.716694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.717007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.717015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.717340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.717348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.717541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.717549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.717853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.717861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.718247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.718255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.718565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.718574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.718884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.718892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.719217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.719228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.719521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.719529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 
00:29:23.143 [2024-10-11 12:03:07.719845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.719852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.720186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.720195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.720498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.720507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.720826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.720837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.721152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.721161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.721533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.721542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.721862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.721871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.722077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.722086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.722362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.722370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.722712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.722720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 
00:29:23.143 [2024-10-11 12:03:07.723051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.723061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.723378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.723386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.723706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.723714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.723945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.723953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.724288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.724296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.724617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.724624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.725032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.725042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.143 [2024-10-11 12:03:07.725383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.143 [2024-10-11 12:03:07.725390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.143 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.725711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.725720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.726053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.726062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 
00:29:23.144 [2024-10-11 12:03:07.726383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.726393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.726712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.726721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.727055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.727063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.727275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.727283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.727632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.727641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.727860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.727868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.728163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.728172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.728526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.728536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.728844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.728853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 00:29:23.144 [2024-10-11 12:03:07.729179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.144 [2024-10-11 12:03:07.729188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.144 qpair failed and we were unable to recover it. 
00:29:23.430 [2024-10-11 12:03:07.729405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.729418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.729740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.729750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.730074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.730083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.730407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.730414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.730739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.730750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.731074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.731085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.731437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.731446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.731786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.731798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.732142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.732158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.732330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.732340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 
00:29:23.430 [2024-10-11 12:03:07.732646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.732654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.732913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.732921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.733251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.733261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.733467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.733476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.733745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.733754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.733974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.733983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.734267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.734275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.734499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.734508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.734720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.734728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.734945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.734953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 
00:29:23.430 [2024-10-11 12:03:07.735297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.735304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.735599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.735607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.735989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.735997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.736317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.736326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.736658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.736666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.737014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.737021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.737326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.737333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.737553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.737560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.737910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.737918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.738246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.738253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 
00:29:23.430 [2024-10-11 12:03:07.738440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.738448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.738744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.738753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.739087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.739095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.739418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.430 [2024-10-11 12:03:07.739425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.430 qpair failed and we were unable to recover it. 00:29:23.430 [2024-10-11 12:03:07.739743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.739750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.740081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.740089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.740415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.740423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.740790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.740800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.741039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.741048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.741376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.741385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 
00:29:23.431 [2024-10-11 12:03:07.741705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.741714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.741916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.741927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.742227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.742236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.742553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.742561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.742902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.742910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.743228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.743238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.743413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.743423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.743765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.743772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.744086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.744094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 00:29:23.431 [2024-10-11 12:03:07.744420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.431 [2024-10-11 12:03:07.744427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.431 qpair failed and we were unable to recover it. 
00:29:23.431 [2024-10-11 12:03:07.744743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.744751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.745082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.745090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.745409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.745417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.745616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.745624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.745910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.745918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.746266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.746274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.746468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.746475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.746779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.746788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.746893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.746900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.747174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.747182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.747505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.747514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.747698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.747708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.748037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.748046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.748346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.748354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.748685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.748694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.749060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.749070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.749395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.749403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.749734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.749743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.750059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.750066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.750401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.750411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.750617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.750625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.750958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.750968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.431 qpair failed and we were unable to recover it.
00:29:23.431 [2024-10-11 12:03:07.751149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.431 [2024-10-11 12:03:07.751157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.751479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.751488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.751821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.751828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.752233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.752245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.752559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.752566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.752761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.752771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.753150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.753157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.753476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.753486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.753808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.753817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.754039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.754047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.754403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.754411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.754773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.754782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.755108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.755116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.755544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.755553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.755885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.755893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.756187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.756195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.756410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.756417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.756619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.756627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.757025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.757034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.757235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.757243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.757552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.757564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.757761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.757769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.758063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.758072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.758398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.758408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.758596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.758603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.758925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.758934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.759265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.759273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.759593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.759602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.759993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.760001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.760267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.760275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.760606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.760616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.760906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.760914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.761234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.761242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.761571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.761579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.761865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.761875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.762204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.762212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.762540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.432 [2024-10-11 12:03:07.762549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.432 qpair failed and we were unable to recover it.
00:29:23.432 [2024-10-11 12:03:07.762873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.762884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.763214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.763224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.763397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.763407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.763574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.763582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.763894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.763902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.764229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.764236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.764614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.764621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.764929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.764938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.765129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.765138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.765342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.765349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.765675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.765685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.765994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.766002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.766202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.766210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.766550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.766559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.766893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.766901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.767116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.767125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.767454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.767464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.767795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.767805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.768131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.768140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.768340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.768348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.768562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.768573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.768952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.768959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.769279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.769288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.769457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.769465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.769804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.769813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.770068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.770075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.770394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.770410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.770723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.770732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.771085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.771094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.771423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.771431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.771647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.771655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.771966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.771975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.772317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.772326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.772540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.772548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.772915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.772925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.773250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.773258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.773463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.773471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.773748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.433 [2024-10-11 12:03:07.773756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.433 qpair failed and we were unable to recover it.
00:29:23.433 [2024-10-11 12:03:07.773969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.773976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.774216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.774223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.774599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.774607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.774963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.774972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.775277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.775285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.775617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.775626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.775846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.775854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.776140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.776148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.776472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.776481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.776804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.776811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.777130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.777140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.777463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.777470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.777795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.777804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.778041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.778048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.778384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.778391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.778633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.778645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.778978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.778986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.779290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.779298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.779623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.779631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.779925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.779935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.780232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.780242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.780440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.780448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.780765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.780773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.780951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.780958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.781272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.781280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.781602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.781610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.781926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.781934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.782269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.782276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.782568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.782576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.782913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.782922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.783182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.783191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.783527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.783535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.783835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.783844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.784027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.784037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.434 [2024-10-11 12:03:07.784351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.434 [2024-10-11 12:03:07.784361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.434 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.784684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.784692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.785018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.785026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.785366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.785374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.785680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.785689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.785911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.785920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.786255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.786262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.786467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.786475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.786838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.786846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.787225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.787233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.787456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.787463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.787736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.787744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.787934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.787942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.788266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.788275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.788592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.788601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.788904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.788912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.789241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.789253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.789560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.789567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.789907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.789915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.790224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.790232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.790534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.790542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.790872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.790881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.791205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.791213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.791534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.791542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.791776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.791785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.792097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.792105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.792527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.792536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.792842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.792851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.793154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.793161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.793382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.793389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.793753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.793762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.794147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.794155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.794457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.794465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.794778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.794787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.795122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.795130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.795446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.795456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.795664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.795679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.795966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.795975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.796303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.796311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.796637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.796644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.796826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.435 [2024-10-11 12:03:07.796835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.435 qpair failed and we were unable to recover it.
00:29:23.435 [2024-10-11 12:03:07.797128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.797136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.797369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.797377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.797705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.797716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.798052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.798060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.798351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.798359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.798544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.798554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.798891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.798900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.799235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.799252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.799576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.799584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.799976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.799984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.800274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.800282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.800614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.800622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.800949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.800958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.801281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.801289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.801623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.801630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.801947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.801956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.802294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.802302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.802553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.802561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.802779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.802788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.803123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.803131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.803465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.803473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.803811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.803819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.804087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.804096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.804434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.804442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.804770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.804779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.805102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.805110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.805434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.805442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.805765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.805774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.806100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.806109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.806426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.806433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.806746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.806756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.806928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.806938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.807253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.807261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.807503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.807511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.807798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.807807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.808125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.808132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.808350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.808358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.436 [2024-10-11 12:03:07.808558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.436 [2024-10-11 12:03:07.808566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.436 qpair failed and we were unable to recover it.
00:29:23.437 [2024-10-11 12:03:07.808883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.808891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.809200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.809208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.809540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.809550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.809877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.809885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.810097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.810104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.810455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.810463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.810818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.810826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.811143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.811151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.811485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.811494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.811815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.811824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 
00:29:23.437 [2024-10-11 12:03:07.812199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.812206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.812502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.812509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.812587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.812594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.812884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.812891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.813110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.813118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.813458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.813469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.813795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.813803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.814109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.814116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.814436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.814443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.814770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.814778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 
00:29:23.437 [2024-10-11 12:03:07.815109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.815116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.815438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.815448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.815742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.815750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.816138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.816147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.816342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.816349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.816550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.816556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.816792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.816800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.817137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.817144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.817442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.817451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.817759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.817768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 
00:29:23.437 [2024-10-11 12:03:07.818065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.818074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.818415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.818423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.818636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.818647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.818876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.818884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.819182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.819190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.819392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.819400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.819736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.819744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.820072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.437 [2024-10-11 12:03:07.820080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.437 qpair failed and we were unable to recover it. 00:29:23.437 [2024-10-11 12:03:07.820407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.820416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.820715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.820724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 
00:29:23.438 [2024-10-11 12:03:07.821032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.821040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.821356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.821364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.821683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.821692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.822036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.822043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.822361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.822369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.822698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.822708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.823036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.823044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.823404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.823411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.823741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.823749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.824083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.824092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 
00:29:23.438 [2024-10-11 12:03:07.824419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.824427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.824675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.824685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.825017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.825026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.825344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.825352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.825675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.825683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.825974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.825981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.826316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.826323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.826521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.826529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.826841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.826848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.827159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.827171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 
00:29:23.438 [2024-10-11 12:03:07.827497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.827505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.827813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.827821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.828056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.828063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.828342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.828349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.828684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.828692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.829050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.829058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.829372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.829382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.829704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.829713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.830050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.830057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.830449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.830458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 
00:29:23.438 [2024-10-11 12:03:07.830771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.830779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.831105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.831113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.831288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.831298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.831602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.831611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.831925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.831934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.832253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.832260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.832577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.832585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.832908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.832916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.438 [2024-10-11 12:03:07.833234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.438 [2024-10-11 12:03:07.833242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.438 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.833564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.833573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 
00:29:23.439 [2024-10-11 12:03:07.833903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.833911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.834281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.834288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.834602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.834610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.834932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.834940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.835287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.835295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.835619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.835626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.835834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.835844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.836190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.836199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.836450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.836459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.836801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.836810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 
00:29:23.439 [2024-10-11 12:03:07.837007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.837016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.837295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.837302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.837701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.837709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.838051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.838060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.838386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.838395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.838716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.838725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.838923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.838931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.839267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.839274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.839500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.839508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.839783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.839792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 
00:29:23.439 [2024-10-11 12:03:07.840124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.840131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.840442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.840449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.840768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.840776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.841069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.841077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.841396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.841403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.841709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.841716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.842054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.842061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.842382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.842389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.842718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.842729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.843124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.843133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 
00:29:23.439 [2024-10-11 12:03:07.843488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.843496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.843646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.843655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.439 qpair failed and we were unable to recover it. 00:29:23.439 [2024-10-11 12:03:07.843980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.439 [2024-10-11 12:03:07.843987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.844320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.844328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.844663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.844679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.845001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.845008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.845322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.845331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.845655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.845663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.845994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.846002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.846365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.846372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 
00:29:23.440 [2024-10-11 12:03:07.846698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.846706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.847031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.847038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.847342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.847350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.847672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.847682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.847896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.847904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.848295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.848303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.848619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.848627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.848946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.848955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.849279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.849287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.849609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.849620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 
00:29:23.440 [2024-10-11 12:03:07.849937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.849947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.850269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.850277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.850580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.850587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.850915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.850922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.851226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.851234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.851553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.851561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.851745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.851754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.852132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.852141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.852477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.852485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 00:29:23.440 [2024-10-11 12:03:07.852834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.440 [2024-10-11 12:03:07.852842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.440 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 12:03:07.853157 through 12:03:07.911111, differing only in timestamps ...]
00:29:23.445 [2024-10-11 12:03:07.911429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.911436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.911698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.911706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.912057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.912064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.912475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.912482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.912818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.912828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.913226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.913234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.913427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.913435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.913804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.913811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.914135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.914142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.914466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.914474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 
00:29:23.445 [2024-10-11 12:03:07.914712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.914720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.445 qpair failed and we were unable to recover it. 00:29:23.445 [2024-10-11 12:03:07.915008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.445 [2024-10-11 12:03:07.915017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.915336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.915345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.915706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.915713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.916039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.916047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.916411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.916420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.916735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.916742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.916945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.916954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.917249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.917258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.917679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.917689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 
00:29:23.446 [2024-10-11 12:03:07.917948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.917955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.918276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.918284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.918610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.918617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.918928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.918936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.919271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.919278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.919606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.919615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.919937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.919947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.920278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.920288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.920620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.920628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.920847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.920855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 
00:29:23.446 [2024-10-11 12:03:07.921185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.921193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.921519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.921527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.921820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.921829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.922170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.922181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.922375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.922383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.922679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.922688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.923061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.923069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.923371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.923379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.923704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.923712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.924101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.924108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 
00:29:23.446 [2024-10-11 12:03:07.924490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.924500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.924744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.924755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.925087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.925095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.925413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.925422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.925628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.925637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.925981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.925990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.926307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.926315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.926629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.926639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.926946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.926954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 00:29:23.446 [2024-10-11 12:03:07.927283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.927290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.446 qpair failed and we were unable to recover it. 
00:29:23.446 [2024-10-11 12:03:07.927611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.446 [2024-10-11 12:03:07.927618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.927919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.927927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.928259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.928267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.928575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.928583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.928911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.928919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.929245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.929255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.929575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.929583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.929921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.929929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.930259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.930266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.930586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.930593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 
00:29:23.447 [2024-10-11 12:03:07.930962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.930971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.931298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.931308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.931412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.931420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.931524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.931533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.931814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.931822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.932036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.932044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.932372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.932379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.932700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.932708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.933009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.933019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.933341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.933350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 
00:29:23.447 [2024-10-11 12:03:07.933673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.933682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.933996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.934003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.934323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.934331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.934653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.934660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.934968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.934976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.935339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.935346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.935647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.935657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.936038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.936047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.936246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.936253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.936551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.936558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 
00:29:23.447 [2024-10-11 12:03:07.936913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.936921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.937249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.937256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.937577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.937585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.937933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.937941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.938264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.938275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.938598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.938607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.938990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.938998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.939304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.939311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.939638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.447 [2024-10-11 12:03:07.939646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.447 qpair failed and we were unable to recover it. 00:29:23.447 [2024-10-11 12:03:07.939968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.939976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 
00:29:23.448 [2024-10-11 12:03:07.940318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.940328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.940654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.940663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.940923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.940931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.941103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.941111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.941387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.941395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.941721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.941728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.942042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.942050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.942372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.942382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.942724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.942732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.943064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.943072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 
00:29:23.448 [2024-10-11 12:03:07.943247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.943255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.943600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.943607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.943915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.943923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.944249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.944258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.944565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.944573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.944919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.944927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.945151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.945158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.945490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.945497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.945859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.945867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.946182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.946189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 
00:29:23.448 [2024-10-11 12:03:07.946513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.946520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.946840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.946850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.947166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.947174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.947499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.947507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.947830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.947839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.948176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.948183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.948394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.948402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.948740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.948749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.949109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.949118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.949442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.949451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 
00:29:23.448 [2024-10-11 12:03:07.949775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.949783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.448 [2024-10-11 12:03:07.950181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.448 [2024-10-11 12:03:07.950189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.448 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.950504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.950511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.950718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.950726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.951094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.951101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.951432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.951442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.951657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.951665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.951981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.951989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.952235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.952243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.952564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.952572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 
00:29:23.449 [2024-10-11 12:03:07.952899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.952908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.953225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.953232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.953636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.953646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.953965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.953973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.954280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.954287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.954603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.954610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.954929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.954939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.955258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.955265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.955576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.955584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 00:29:23.449 [2024-10-11 12:03:07.955901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.955909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it. 
00:29:23.449 [2024-10-11 12:03:07.956244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.449 [2024-10-11 12:03:07.956254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.449 qpair failed and we were unable to recover it.
00:29:23.449 [... the preceding posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x102dbd0 (addr=10.0.0.2, port=4420) repeats continuously, roughly 200 further connection attempts between 12:03:07.956 and 12:03:08.021; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:23.454 [2024-10-11 12:03:08.021548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.454 [2024-10-11 12:03:08.021559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.454 qpair failed and we were unable to recover it.
00:29:23.454 [2024-10-11 12:03:08.021933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.454 [2024-10-11 12:03:08.021942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.454 qpair failed and we were unable to recover it. 00:29:23.454 [2024-10-11 12:03:08.022243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.022251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.022571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.022578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.022898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.022908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.023235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.023244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.023564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.023573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.023905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.023913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.024089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.024097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.024462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.024470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.024795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.024802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 
00:29:23.455 [2024-10-11 12:03:08.025209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.025220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.025546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.025556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.025897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.025905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.026228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.026236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.026569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.026577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.026904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.026913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.027274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.027282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.027485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.027494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.027815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.027823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.028149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.028158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 
00:29:23.455 [2024-10-11 12:03:08.028470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.028479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.028800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.028808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.028985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.028994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.029329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.029336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.029666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.029680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.030036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.030044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.030444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.030454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.030772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.030780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.030987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.030995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.031270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.031278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 
00:29:23.455 [2024-10-11 12:03:08.031592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.031601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.031925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.031933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.032270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.032277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.032488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.032497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.032839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.032847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.033145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.033153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.033380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.033387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.033732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.033740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.034059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.034075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.034394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.034401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 
00:29:23.455 [2024-10-11 12:03:08.034724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.455 [2024-10-11 12:03:08.034734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.455 qpair failed and we were unable to recover it. 00:29:23.455 [2024-10-11 12:03:08.035057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.456 [2024-10-11 12:03:08.035065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.456 qpair failed and we were unable to recover it. 00:29:23.456 [2024-10-11 12:03:08.035476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.456 [2024-10-11 12:03:08.035483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.456 qpair failed and we were unable to recover it. 00:29:23.456 [2024-10-11 12:03:08.035816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.456 [2024-10-11 12:03:08.035824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.456 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.035995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.036007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.036342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.036349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.036578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.036586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.036928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.036935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.037310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.037320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.037645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.037654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 
00:29:23.768 [2024-10-11 12:03:08.038028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.038036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.038335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.038343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.038679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.038688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.039014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.039021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.039337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.039344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.039659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.039666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.039994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.040002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.040325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.040336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.040553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.040561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.040833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.040842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 
00:29:23.768 [2024-10-11 12:03:08.041171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.041178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.041377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.041385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.041770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.041780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.042090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.042097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.042475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.042485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.042812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.042821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.043088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.043096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.043477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.043484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.043820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.768 [2024-10-11 12:03:08.043828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.768 qpair failed and we were unable to recover it. 00:29:23.768 [2024-10-11 12:03:08.044151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.044159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 
00:29:23.769 [2024-10-11 12:03:08.044359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.044366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.044735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.044743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.045121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.045131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.045505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.045513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.045850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.045859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.046175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.046182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.046512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.046520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.046845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.046853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.047192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.047200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.047526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.047534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 
00:29:23.769 [2024-10-11 12:03:08.047866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.047875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.048327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.048335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.048692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.048701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.049014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.049022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.049182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.049189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.049537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.049544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.049839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.049847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.050188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.050195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.050571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.050579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.050792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.050802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 
00:29:23.769 [2024-10-11 12:03:08.051143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.051150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.051476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.051484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.051813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.051820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.052124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.052133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.052476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.052483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.052706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.052714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.053020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.053029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.053340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.053347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.053680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.053689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.053996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.054004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 
00:29:23.769 [2024-10-11 12:03:08.054308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.054315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.054645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.054653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.054880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.054888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.055232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.055239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.055546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.055554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.055736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.055744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.056115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.056123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.056466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.056473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.056689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.769 [2024-10-11 12:03:08.056697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.769 qpair failed and we were unable to recover it. 00:29:23.769 [2024-10-11 12:03:08.057056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.057063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 
00:29:23.770 [2024-10-11 12:03:08.057374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.057382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.057729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.057737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.057967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.057975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.058208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.058216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.058539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.058548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.058747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.058755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.059102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.059110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.059435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.059441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.059755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.059763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.060115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.060122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 
00:29:23.770 [2024-10-11 12:03:08.060340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.060347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.060708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.060716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.061100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.061109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.061454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.061462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.061710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.061718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.062072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.062081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.062403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.062417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.062735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.062743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.063001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.063009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 00:29:23.770 [2024-10-11 12:03:08.063307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.770 [2024-10-11 12:03:08.063314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.770 qpair failed and we were unable to recover it. 
00:29:23.770 [2024-10-11 12:03:08.063460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.770 [2024-10-11 12:03:08.063466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.770 qpair failed and we were unable to recover it.
[The identical three-line error repeats for every retry of this qpair — connect() failed, errno = 111; sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — at timestamps 12:03:08.063 through 12:03:08.126.]
00:29:23.775 [2024-10-11 12:03:08.126463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.775 [2024-10-11 12:03:08.126471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.775 qpair failed and we were unable to recover it.
00:29:23.775 [2024-10-11 12:03:08.126712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.775 [2024-10-11 12:03:08.126721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.775 qpair failed and we were unable to recover it. 00:29:23.775 [2024-10-11 12:03:08.127069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.775 [2024-10-11 12:03:08.127078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.775 qpair failed and we were unable to recover it. 00:29:23.775 [2024-10-11 12:03:08.127366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.775 [2024-10-11 12:03:08.127376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.775 qpair failed and we were unable to recover it. 00:29:23.775 [2024-10-11 12:03:08.127575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.775 [2024-10-11 12:03:08.127584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.775 qpair failed and we were unable to recover it. 00:29:23.775 [2024-10-11 12:03:08.127852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.775 [2024-10-11 12:03:08.127860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.128212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.128220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.128397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.128406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.128799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.128808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.129011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.129019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.129228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.129236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 
00:29:23.776 [2024-10-11 12:03:08.129315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.129323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.129514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.129523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.129735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.129752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.129992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.130000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.130339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.130347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.130687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.130696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.130920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.130928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.131231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.131239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.131422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.131430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.131634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.131641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 
00:29:23.776 [2024-10-11 12:03:08.132018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.132028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.132225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.132233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.132569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.132578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.132777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.132785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.133122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.133129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.133299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.133308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.133607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.133617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.133950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.133958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.134279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.134289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.134611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.134622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 
00:29:23.776 [2024-10-11 12:03:08.134885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.134893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.135115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.135123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.135485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.135494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.135929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.135937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.136263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.136271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.136593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.136600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.136910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.136920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.137245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.137253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.137416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.137424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.137834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.137843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 
00:29:23.776 [2024-10-11 12:03:08.138048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.138059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.138402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.138409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.138711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.138719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.139066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.139074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.776 [2024-10-11 12:03:08.139382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.776 [2024-10-11 12:03:08.139390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.776 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.139731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.139739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.140077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.140085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.140399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.140407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.140594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.140602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.140907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.140916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 
00:29:23.777 [2024-10-11 12:03:08.141245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.141253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.141587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.141595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.142050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.142058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.142443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.142450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.142692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.142700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.142927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.142935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.143250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.143258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.143451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.143460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.143711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.143720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.144068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.144076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 
00:29:23.777 [2024-10-11 12:03:08.144377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.144385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.144706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.144715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.145079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.145087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.145311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.145320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.145601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.145608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.145944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.145952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.146283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.146291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.146609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.146617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.146878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.146887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.147213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.147221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 
00:29:23.777 [2024-10-11 12:03:08.147568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.147575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.147874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.147883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.148211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.148218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.148428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.148435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.148641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.148650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.148930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.148938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.149163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.149171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.149494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.149503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.149832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.149840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 00:29:23.777 [2024-10-11 12:03:08.150172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.777 [2024-10-11 12:03:08.150179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.777 qpair failed and we were unable to recover it. 
00:29:23.777 [2024-10-11 12:03:08.150554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.150562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.150874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.150882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.151211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.151219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.151521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.151529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.151866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.151874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.152204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.152213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.152540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.152548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.152843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.152851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.153190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.153198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.153517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.153525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 
00:29:23.778 [2024-10-11 12:03:08.153847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.153854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.154181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.154190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.154512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.154521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.154724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.154733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.155067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.155074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.155478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.155486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.155780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.155789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.156119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.156129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.156540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.156549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.156886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.156895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 
00:29:23.778 [2024-10-11 12:03:08.157230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.157238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.157560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.157567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.157775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.157784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.158008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.158016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.158316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.158325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.158652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.158661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.158876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.158885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.159207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.159216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.159535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.159544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.159840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.159848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 
00:29:23.778 [2024-10-11 12:03:08.160152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.160159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.160482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.160489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.160703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.160710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.161098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.161106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.161514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.161521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.161872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.161880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.162106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.162113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.162452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.162460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.162779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.162787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.778 [2024-10-11 12:03:08.163120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.163128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 
00:29:23.778 [2024-10-11 12:03:08.163452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.778 [2024-10-11 12:03:08.163461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.778 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.163789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.163797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.164529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.164537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.164843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.164852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.165184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.165194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.165519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.165527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.165736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.165743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.166054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.166062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.166408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.166415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 00:29:23.779 [2024-10-11 12:03:08.166743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.779 [2024-10-11 12:03:08.166753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.779 qpair failed and we were unable to recover it. 
00:29:23.779 [2024-10-11 12:03:08.167088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.779 [2024-10-11 12:03:08.167096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.779 qpair failed and we were unable to recover it.
[... the same three-record failure pattern — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x102dbd0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 12:03:08.167 through 12:03:08.233; duplicate records elided ...]
00:29:23.784 [2024-10-11 12:03:08.233438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:23.784 [2024-10-11 12:03:08.233445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:23.784 qpair failed and we were unable to recover it.
00:29:23.784 [2024-10-11 12:03:08.233766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.233775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.234020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.234031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.234243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.234250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.234624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.234633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.234952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.234961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.235285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.235293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.235597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.235604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.235919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.235927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.236136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.236143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.236464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.236472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 
00:29:23.784 [2024-10-11 12:03:08.236795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.236804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.236991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.237000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.237343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.237350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.237681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.237689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.237957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.237964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.238308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.238316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.238689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.238699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.239039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.239047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.784 [2024-10-11 12:03:08.239379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.784 [2024-10-11 12:03:08.239387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.784 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.239706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.239714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 
00:29:23.785 [2024-10-11 12:03:08.240084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.240092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.240413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.240421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.240776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.240785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.241104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.241112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.241476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.241485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.241802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.241810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.242128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.242135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.242458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.242465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.242793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.242803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.243173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.243180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 
00:29:23.785 [2024-10-11 12:03:08.243499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.243509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.243828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.243836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.244173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.244181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.244506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.244512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.244820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.244829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.245139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.245148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.245502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.245513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.245830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.245840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.246160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.246169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.246493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.246501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 
00:29:23.785 [2024-10-11 12:03:08.246731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.246739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.247078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.247085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.247405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.247413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.247736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.247746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.248066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.248074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.248399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.248406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.248731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.248739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.248927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.248936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.249226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.249233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.249567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.249575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 
00:29:23.785 [2024-10-11 12:03:08.249881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.249890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.250193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.250201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.250521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.250529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.250842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.785 [2024-10-11 12:03:08.250850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.785 qpair failed and we were unable to recover it. 00:29:23.785 [2024-10-11 12:03:08.251171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.251179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.251485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.251492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.251811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.251819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.251993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.252002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.252494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.252503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.252708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.252716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 
00:29:23.786 [2024-10-11 12:03:08.253046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.253054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.253394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.253402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.253730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.253738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.254061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.254069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.254388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.254397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.254721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.254731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.255069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.255077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.255400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.255409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.255726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.255734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.256091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.256102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 
00:29:23.786 [2024-10-11 12:03:08.256318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.256326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.256551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.256558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.256873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.256880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.257214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.257224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.257560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.257569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.257893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.257901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.258225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.258232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.258545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.258552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.258841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.258849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.259188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.259198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 
00:29:23.786 [2024-10-11 12:03:08.259505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.259513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.259835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.259844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.260163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.260171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.260542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.260549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.260871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.260879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.261257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.261264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.261656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.261663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.261966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.261976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.262297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.262306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.262622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.262631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 
00:29:23.786 [2024-10-11 12:03:08.262953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.262961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.263276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.263284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.263620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.263628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.263953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.263961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.264198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.786 [2024-10-11 12:03:08.264207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.786 qpair failed and we were unable to recover it. 00:29:23.786 [2024-10-11 12:03:08.264535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.264545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.264848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.264858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.265166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.265174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.265506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.265514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.265730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.265738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 
00:29:23.787 [2024-10-11 12:03:08.266107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.266114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.266442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.266452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.266784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.266803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.267107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.267114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.267327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.267336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.267688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.267696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.268001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.268009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.268351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.268360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.268565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.268576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.268784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.268794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 
00:29:23.787 [2024-10-11 12:03:08.269125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.269134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.269447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.269455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.269626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.269633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.269978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.269986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.270308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.270315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.270636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.270644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.270833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.270842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.271179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.271188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.271403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.271411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.271740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.271748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 
00:29:23.787 [2024-10-11 12:03:08.272068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.272075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.272398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.272405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.272727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.272735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.273062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.273072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.273389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.273399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.273720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.273729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.274145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.274153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.274477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.274485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.274814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.274821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 00:29:23.787 [2024-10-11 12:03:08.275156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.275164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it. 
00:29:23.787 [2024-10-11 12:03:08.275498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.787 [2024-10-11 12:03:08.275508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.787 qpair failed and we were unable to recover it.
00:29:23.787 (the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x102dbd0, addr=10.0.0.2, port=4420 repeated approximately 160 times, timestamps 12:03:08.275823 through 12:03:08.325303)
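On Linux, errno = 111 is ECONNREFUSED: each connect() to 10.0.0.2:4420 is answered with a TCP RST because nothing is listening on the target port at that moment, and the initiator immediately retries, producing the run of identical messages above. The following minimal C sketch is illustrative only (it is not SPDK's reconnect logic; the address, port, retry count, and back-off are assumptions chosen to mirror the log) and reproduces the same errno path with a bounded retry loop:

/* Minimal standalone sketch of the retry pattern visible above:
 * connect() to 10.0.0.2:4420 fails with ECONNREFUSED (errno 111)
 * while no NVMe/TCP target is listening. Not SPDK's actual code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt + 1);
            close(fd);
            return 0;
        }
        /* With no listener on the port, errno is ECONNREFUSED (111 on Linux). */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        usleep(100 * 1000);                      /* back off before retrying */
    }
    return 1;
}

Run against a host with no listener on port 4420, this prints the same "connect() failed, errno = 111" line on each attempt before giving up.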
00:29:23.791 [2024-10-11 12:03:08.325632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.791 [2024-10-11 12:03:08.325640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.791 qpair failed and we were unable to recover it. (one further occurrence at 12:03:08.325817)
00:29:23.791 Read completed with error (sct=0, sc=8) 00:29:23.791 starting I/O failed (32 interleaved Read/Write completions failed with sct=0, sc=8, each followed by "starting I/O failed")
00:29:23.792 [2024-10-11 12:03:08.326612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:23.792 Read completed with error (sct=0, sc=8) 00:29:23.792 starting I/O failed (a second run of 32 Read/Write completions failed with sct=0, sc=8, each followed by "starting I/O failed")
00:29:23.792 [2024-10-11 12:03:08.327443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:23.792 [2024-10-11 12:03:08.328005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.792 [2024-10-11 12:03:08.328127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a8000b90 with addr=10.0.0.2, port=4420 00:29:23.792 qpair failed and we were unable to recover it.
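The (sct, sc) pair printed for each failed completion is an NVMe status code type and status code. Assuming the standard NVMe status encoding (the same values appear in SPDK's nvme_spec.h), sct=0 is the generic command status set and sc=8 is "Command Aborted due to SQ Deletion": the 32 outstanding I/Os on each qpair were aborted because the qpair was torn down after the CQ transport error (-6, i.e. -ENXIO, "No such device or address"), not because of a media error. A small decoder sketch under that assumption:

/* Hedged sketch: decode the (sct, sc) pair printed for each aborted I/O,
 * assuming the standard NVMe generic command status values. */
#include <stdio.h>

static const char *nvme_generic_sc_str(unsigned sc)
{
    switch (sc) {
    case 0x0: return "Successful Completion";
    case 0x4: return "Data Transfer Error";
    case 0x7: return "Command Abort Requested";
    case 0x8: return "Command Aborted due to SQ Deletion";
    default:  return "Other/Unlisted Generic Status";
    }
}

int main(void)
{
    unsigned sct = 0, sc = 8;    /* values from the log lines above */
    if (sct == 0)
        printf("sct=%u, sc=%u -> %s\n", sct, sc, nvme_generic_sc_str(sc));
    return 0;
}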
00:29:23.792 [2024-10-11 12:03:08.328515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.792 [2024-10-11 12:03:08.328527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.792 qpair failed and we were unable to recover it.
00:29:23.792 (the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x102dbd0 repeated approximately 30 times in total, timestamps 12:03:08.329010 through 12:03:08.338112)
00:29:23.793 [2024-10-11 12:03:08.338437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.338445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.338740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.338748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.338960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.338969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.339285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.339292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.339691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.339702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.340036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.340044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.340237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.340244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.340610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.340617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.340923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.340931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.341244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.341251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 
00:29:23.793 [2024-10-11 12:03:08.341584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.341594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.341900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.341907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.342235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.342243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.342557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.342564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.342888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.342896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.343220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.343227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.343547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.343555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.343889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.343897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.344108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.344115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.344458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.344468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 
00:29:23.793 [2024-10-11 12:03:08.344779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.344788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.345004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.345013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.345298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.345307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.345627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.345634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.345933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.345942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.346260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.346267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.346575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.346584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.346921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.346930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.347252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.793 [2024-10-11 12:03:08.347260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.793 qpair failed and we were unable to recover it. 00:29:23.793 [2024-10-11 12:03:08.347584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.347592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 
00:29:23.794 [2024-10-11 12:03:08.347896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.347904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.348227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.348234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.348552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.348560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.348894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.348905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.349228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.349236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.349555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.349563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.349739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.349748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.350093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.350102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.350427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.350436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.350749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.350759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 
00:29:23.794 [2024-10-11 12:03:08.351061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.351069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.351362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.351370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.351549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.351557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.351882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.351890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.352219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.352229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.352400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.352412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.352786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.352793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.353129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.353136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.353462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.353469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.353686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.353696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 
00:29:23.794 [2024-10-11 12:03:08.353882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.353891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.354168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.354176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.354584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.354591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.354952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.354960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.355285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.355293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.355616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.355625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.355949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.355959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.356260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.356268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.356595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.356603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.356911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.356919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 
00:29:23.794 [2024-10-11 12:03:08.357142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.357160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.357477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.357484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.357683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.357691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.357926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.794 [2024-10-11 12:03:08.357934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.794 qpair failed and we were unable to recover it. 00:29:23.794 [2024-10-11 12:03:08.358151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.358159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.358490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.358498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.358829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.358837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.359071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.359079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.359460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.359470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.359786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.359794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 
00:29:23.795 [2024-10-11 12:03:08.360119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.360127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.360455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.360465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.360768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.360777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.361111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.361119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.361425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.361433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.361747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.361755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.362094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.362102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.362467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.362475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.362837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.362854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.363212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.363220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 
00:29:23.795 [2024-10-11 12:03:08.363546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.363554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.363897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.363905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.364100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.364109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.364405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.364414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.364747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.364754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.364978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.364986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.365250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.365259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.365603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.365612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.365915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.365925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.366137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.366145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 
00:29:23.795 [2024-10-11 12:03:08.366367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.366375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.366546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.366554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.366812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.366820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.367164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.367172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.367385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.367394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.367598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.367607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.367966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.367976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.368305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.368314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.368651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.368660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.369035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.369044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 
00:29:23.795 [2024-10-11 12:03:08.369384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.369393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.369722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.369730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.370055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.370063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.370410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.370417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.795 qpair failed and we were unable to recover it. 00:29:23.795 [2024-10-11 12:03:08.370740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.795 [2024-10-11 12:03:08.370749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.371097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.371104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.371427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.371435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.371746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.371754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.372079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.372087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.372287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.372295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 
00:29:23.796 [2024-10-11 12:03:08.372570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.372579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.373051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.373059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.373265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.373273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.373475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.373482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.373817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.373827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.374169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.374177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.374351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.374360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.374756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.374764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.375109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.375116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.375485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.375493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 
00:29:23.796 [2024-10-11 12:03:08.375792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.375801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:23.796 [2024-10-11 12:03:08.376141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.796 [2024-10-11 12:03:08.376149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:23.796 qpair failed and we were unable to recover it. 00:29:24.089 [2024-10-11 12:03:08.376557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.376570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-10-11 12:03:08.377231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.377240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-10-11 12:03:08.377581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.377589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-10-11 12:03:08.377914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.377922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-10-11 12:03:08.378273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.378281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-10-11 12:03:08.378610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.378619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-10-11 12:03:08.378851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.378860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 00:29:24.089 [2024-10-11 12:03:08.379256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.379264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 
00:29:24.089 [2024-10-11 12:03:08.379616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.089 [2024-10-11 12:03:08.379624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.089 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 12:03:08.379 and 12:03:08.444 (wall clock 00:29:24.089-00:29:24.095) as the initiator keeps retrying; only the microsecond timestamps differ between repetitions ...]
00:29:24.095 [2024-10-11 12:03:08.444372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.444380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.444710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.444718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.445047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.445055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.445381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.445389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.445695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.445704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.445936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.445944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.446340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.446347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.446567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.446573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.446900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.446915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.447134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.447142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 
00:29:24.095 [2024-10-11 12:03:08.447486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.447495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.447829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.447837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.448069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.448076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.448279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.448287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.448484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.448491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.448682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.448690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.448801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.448810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.449167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.449176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.449528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.449536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.449853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.449861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 
00:29:24.095 [2024-10-11 12:03:08.450185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.450193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.450517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.450525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.450769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.450777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.451118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.451125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.451436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.451443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.451799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.451806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.452127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.452136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.452551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.452562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.452887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.452896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.453281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.453288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 
00:29:24.095 [2024-10-11 12:03:08.453527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.453535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.453868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.453876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.454216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.454223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.454433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.454440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.454736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.454744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.455065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.455073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.455398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.095 [2024-10-11 12:03:08.455405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.095 qpair failed and we were unable to recover it. 00:29:24.095 [2024-10-11 12:03:08.455627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.455635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.455872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.455880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.456219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.456226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 
00:29:24.096 [2024-10-11 12:03:08.456533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.456540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.456856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.456863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.457180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.457187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.457518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.457529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.457733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.457741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.458058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.458066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.458394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.458401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.458602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.458609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.458983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.458993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.459305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.459312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 
00:29:24.096 [2024-10-11 12:03:08.459553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.459561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.459871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.459879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.460097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.460105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.460377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.460384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.460709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.460717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.460939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.460946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.461228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.461236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.461422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.461429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.461755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.461763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.462115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.462124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 
00:29:24.096 [2024-10-11 12:03:08.462453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.462462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.462840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.462848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.463061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.463068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.463436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.463442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.463697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.463705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.464041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.464048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.464370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.464378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.464710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.464718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.465042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.465050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.465371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.465377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 
00:29:24.096 [2024-10-11 12:03:08.465838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.465848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.466185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.466193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.466535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.466544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.466733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.466741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.466948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.466954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.467176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.467184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.467516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.467524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.096 [2024-10-11 12:03:08.467725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.096 [2024-10-11 12:03:08.467732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.096 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.468042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.468050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.468380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.468387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 
00:29:24.097 [2024-10-11 12:03:08.468705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.468713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.468942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.468950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.469331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.469338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.469534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.469542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.469848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.469858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.470182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.470189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.470473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.470481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.470582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.470590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.470768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.470776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.471018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.471026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 
00:29:24.097 [2024-10-11 12:03:08.471358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.471365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.471695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.471702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.471938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.471945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.472286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.472293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.472619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.472626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.472894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.472901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.473242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.473249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.473609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.473616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.473823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.473832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.474282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.474289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 
00:29:24.097 [2024-10-11 12:03:08.474496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.474504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.474863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.474870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.475059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.475066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.475479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.475488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.475682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.475691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.476022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.476030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.476355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.476363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.476705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.476712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.477036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.477044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.477408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.477415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 
00:29:24.097 [2024-10-11 12:03:08.477727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.477735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.477961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.477970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.478151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.478158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.478505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.478514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.097 [2024-10-11 12:03:08.478850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.097 [2024-10-11 12:03:08.478858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.097 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.479191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.479199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.479518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.479526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.479721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.479729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.480176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.480186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.480482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.480490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 
00:29:24.098 [2024-10-11 12:03:08.480807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.480814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.481143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.481151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.481361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.481369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.481474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.481483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.481769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.481776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.482092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.482100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.482428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.482435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.482741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.482749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.482947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.482957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 00:29:24.098 [2024-10-11 12:03:08.483300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.098 [2024-10-11 12:03:08.483308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.098 qpair failed and we were unable to recover it. 
00:29:24.098 [2024-10-11 12:03:08.483629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.098 [2024-10-11 12:03:08.483637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.098 qpair failed and we were unable to recover it.
00:29:24.098 [2024-10-11 12:03:08.483968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.098 [2024-10-11 12:03:08.483975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.098 qpair failed and we were unable to recover it.
00:29:24.098-00:29:24.104 [... the same three-line sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats verbatim, with only the timestamps advancing, through 2024-10-11 12:03:08.550087 ...]
00:29:24.104 [2024-10-11 12:03:08.550402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.550419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.550750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.550758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.551089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.551097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.551422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.551430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.551734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.551742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.552157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.552164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.552487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.552494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.552708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.552716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.552959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.552966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.553343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.553350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 
00:29:24.104 [2024-10-11 12:03:08.553565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.553573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.553928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.553935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.554114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.554122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.554308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.554316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.554525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.554533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.554814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.554824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.555167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.555175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.555354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.555361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.555677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.555684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.555852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.555860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 
00:29:24.104 [2024-10-11 12:03:08.556139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.556146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.556338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.556346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.556631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.556639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.556966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.556974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.557144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.557152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.557403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.557411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.557720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.557729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.558055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.558063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.558400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.558408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.558773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.558780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 
00:29:24.104 [2024-10-11 12:03:08.559186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.559193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.559526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.559533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.559863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.559871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.104 [2024-10-11 12:03:08.560085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.104 [2024-10-11 12:03:08.560095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.104 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.560384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.560392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.560714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.560722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.561055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.561062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.561394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.561402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.561725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.561732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.562063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.562071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 
00:29:24.105 [2024-10-11 12:03:08.562403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.562411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.562630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.562637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.562960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.562971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.563306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.563313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.563684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.563692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.563994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.564002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.564324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.564331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.564654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.564662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.564984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.564993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.565325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.565333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 
00:29:24.105 [2024-10-11 12:03:08.565724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.565733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.566033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.566040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.566377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.566385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.566706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.566713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.567031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.567039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.567427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.567434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.567761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.567769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.567964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.567972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.568292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.568301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.568626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.568634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 
00:29:24.105 [2024-10-11 12:03:08.568827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.568835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.569206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.569214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.569540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.569547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.569886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.569894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.570221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.570229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.570545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.570552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.570890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.570898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.571107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.571115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.571438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.571445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.571744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.571752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 
00:29:24.105 [2024-10-11 12:03:08.572093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.572100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.572306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.572313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.572655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.105 [2024-10-11 12:03:08.572663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.105 qpair failed and we were unable to recover it. 00:29:24.105 [2024-10-11 12:03:08.572974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.572982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.573312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.573320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.573640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.573648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.573865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.573873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.574208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.574215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.574540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.574547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.574862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.574870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 
00:29:24.106 [2024-10-11 12:03:08.575203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.575211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.575537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.575546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.575844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.575852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.576253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.576261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.576539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.576547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.576870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.576878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.577206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.577214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.577544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.577551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.577884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.577892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.578228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.578235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 
00:29:24.106 [2024-10-11 12:03:08.578562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.578570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.578912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.578921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.579248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.579256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.579572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.579589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.580009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.580018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.580411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.580419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.580740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.580747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.581068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.581076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.581405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.581413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.581739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.581746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 
00:29:24.106 [2024-10-11 12:03:08.582076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.582084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.582411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.582418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.582749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.582756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.583111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.583118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.583423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.583430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.583532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.583539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.583906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.583913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.584221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.584229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.584333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.584340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.584577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.584585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 
00:29:24.106 [2024-10-11 12:03:08.584906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.584916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.106 [2024-10-11 12:03:08.585280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.106 [2024-10-11 12:03:08.585289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.106 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.585616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.585625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.585953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.585963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.586288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.586296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.586623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.586630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.587054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.587062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.587480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.587490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.587812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.587820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.588017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.588025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 
00:29:24.107 [2024-10-11 12:03:08.588363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.588371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.588700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.588709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.589049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.589058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.589229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.589239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.589617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.589627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.589918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.589926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.590234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.590242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.590445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.590454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.590794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.590802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 00:29:24.107 [2024-10-11 12:03:08.591126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.107 [2024-10-11 12:03:08.591134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.107 qpair failed and we were unable to recover it. 
00:29:24.107 [2024-10-11 12:03:08.591459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.107 [2024-10-11 12:03:08.591467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.107 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats back-to-back with tqpair=0x102dbd0 from 12:03:08.591 through 12:03:08.610, every attempt refused with errno = 111 against addr=10.0.0.2, port=4420 ...]
00:29:24.109 [2024-10-11 12:03:08.610465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.109 [2024-10-11 12:03:08.610574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6b4000b90 with addr=10.0.0.2, port=4420
00:29:24.109 qpair failed and we were unable to recover it.
[... three further attempts through 12:03:08.612 likewise report tqpair=0x7fe6b4000b90 ...]
00:29:24.109 [2024-10-11 12:03:08.612807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.109 [2024-10-11 12:03:08.612822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.109 qpair failed and we were unable to recover it.
[... the tqpair=0x102dbd0 triplet then repeats back-to-back through 12:03:08.657 (roughly two hundred refused connect() attempts in this window overall) ...]
00:29:24.113 [2024-10-11 12:03:08.657394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.113 [2024-10-11 12:03:08.657400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.113 qpair failed and we were unable to recover it.
00:29:24.113 [2024-10-11 12:03:08.657730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.657738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.657932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.657941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.658300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.658308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.658677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.658685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.659011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.659018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.659211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.659219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.659585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.659592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.659810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.659818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.660086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.660094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.660311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.660318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 
00:29:24.113 [2024-10-11 12:03:08.660649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.660657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.113 [2024-10-11 12:03:08.661025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.113 [2024-10-11 12:03:08.661033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.113 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.661341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.661349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.661557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.661564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.661873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.661884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.662234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.662241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.662562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.662569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.662896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.662903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.663213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.663221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.663412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.663421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 
00:29:24.114 [2024-10-11 12:03:08.663713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.663722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.663939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.663946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.664279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.664287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.664605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.664612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.665005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.665013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.665223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.665231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.665494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.665501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.665809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.665817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.666135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.666143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.666485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.666492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 
00:29:24.114 [2024-10-11 12:03:08.666682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.666690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.667071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.667078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.667264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.667271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.667661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.667674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.668017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.668024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.668355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.668362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.668666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.668682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.669008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.669017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.669374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.669383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.669705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.669714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 
00:29:24.114 [2024-10-11 12:03:08.670051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.670058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.670390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.670401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.670727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.670734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.670904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.670912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.671235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.671244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.671569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.671576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.671903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.671911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.672281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.672288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.114 [2024-10-11 12:03:08.672692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.114 [2024-10-11 12:03:08.672701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.114 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.673037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.673045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 
00:29:24.115 [2024-10-11 12:03:08.673373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.673381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.673702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.673710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.674100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.674109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.674431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.674438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.674817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.674826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.675144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.675151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.675368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.675376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.675717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.675724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.676041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.676049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.676282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.676289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 
00:29:24.115 [2024-10-11 12:03:08.676614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.676621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.677033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.677040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.677257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.677265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.677589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.677597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.677980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.677987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.678313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.678321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.678679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.678688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.679027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.679034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.679258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.679266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.679590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.679597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 
00:29:24.115 [2024-10-11 12:03:08.679779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.679787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.680179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.680186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.680546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.680553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.680748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.680756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.681100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.681107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.681414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.681422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.681745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.681753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.682076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.682084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.682411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.682418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.682721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.682728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 
00:29:24.115 [2024-10-11 12:03:08.683066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.683073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.683294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.683301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.683622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.683629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.683929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.683937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.684296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.684303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.115 qpair failed and we were unable to recover it. 00:29:24.115 [2024-10-11 12:03:08.684606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.115 [2024-10-11 12:03:08.684613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.684951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.684959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.685263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.685271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.685592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.685599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.685923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.685931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 
00:29:24.116 [2024-10-11 12:03:08.686265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.686274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.686463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.686472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.686826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.686834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.687149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.687157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.687476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.687483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.687791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.687799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.688133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.688140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.688362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.688370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.688721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.688729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.688956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.688964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 
00:29:24.116 [2024-10-11 12:03:08.689319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.689326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.689681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.689689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.690008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.690016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.690341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.690349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.690510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.690519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.690833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.690841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.691164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.691172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.691490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.691497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.691842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.691850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.692136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.692146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 
00:29:24.116 [2024-10-11 12:03:08.692466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.692473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.692796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.692803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.693097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.693104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.693429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.693436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.693757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.693765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.694093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.694100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.116 qpair failed and we were unable to recover it. 00:29:24.116 [2024-10-11 12:03:08.694416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.116 [2024-10-11 12:03:08.694424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.694757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.694764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.695064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.695072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.695262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.695271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 
00:29:24.117 [2024-10-11 12:03:08.695594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.695603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.695936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.695944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.696455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.696464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.696792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.696799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.696995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.697002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.697382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.697390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.117 [2024-10-11 12:03:08.697732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.117 [2024-10-11 12:03:08.697740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.117 qpair failed and we were unable to recover it. 00:29:24.440 [2024-10-11 12:03:08.698069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.440 [2024-10-11 12:03:08.698079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.440 qpair failed and we were unable to recover it. 00:29:24.440 [2024-10-11 12:03:08.698314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.440 [2024-10-11 12:03:08.698323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.440 qpair failed and we were unable to recover it. 00:29:24.440 [2024-10-11 12:03:08.698656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.440 [2024-10-11 12:03:08.698663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.440 qpair failed and we were unable to recover it. 
00:29:24.440 [2024-10-11 12:03:08.698969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.440 [2024-10-11 12:03:08.698976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.440 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats continuously for every reconnect attempt from 12:03:08.698969 through 12:03:08.765595 (log timestamps 00:29:24.440-00:29:24.446): connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x102dbd0 at addr=10.0.0.2, port=4420, and the qpair cannot be recovered ...]
00:29:24.446 [2024-10-11 12:03:08.765595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.446 [2024-10-11 12:03:08.765603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.446 qpair failed and we were unable to recover it.
00:29:24.446 [2024-10-11 12:03:08.765943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.765951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.766320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.766328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.766491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.766498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.766821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.766829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.767168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.767175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.767502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.767510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.767854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.767861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.768255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.768262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.768588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.768596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.768904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.768911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 
00:29:24.446 [2024-10-11 12:03:08.769238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.769246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.769568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.769576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.769802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.769811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.770135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.770144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.770465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.770473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.770797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.770804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.771128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.771135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.446 [2024-10-11 12:03:08.771497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.446 [2024-10-11 12:03:08.771505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.446 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.771707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.771716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.772028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.772035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 
00:29:24.447 [2024-10-11 12:03:08.772356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.772364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.772565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.772572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.772927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.772934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.773118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.773127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.773456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.773464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.773781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.773789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.774205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.774213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.774522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.774530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.774866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.774874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.775180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.775188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 
00:29:24.447 [2024-10-11 12:03:08.775503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.775510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.775751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.775759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.776100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.776107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.776281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.776289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.776697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.776705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.776924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.776932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.777293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.777302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.777651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.777659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.777909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.777916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.778237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.778245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 
00:29:24.447 [2024-10-11 12:03:08.778589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.778597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.778829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.778838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.779053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.779062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.779374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.779381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.779717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.779725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.780034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.780042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.780366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.780374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.780465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.780474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.780763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.780771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.781091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.781100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 
00:29:24.447 [2024-10-11 12:03:08.781436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.781444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.781660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.781672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.782034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.782041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.447 [2024-10-11 12:03:08.782361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.447 [2024-10-11 12:03:08.782368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.447 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.782533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.782541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.782856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.782864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.783191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.783199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.783521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.783529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.783746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.783754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.784049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.784058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 
00:29:24.448 [2024-10-11 12:03:08.784426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.784433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.784641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.784648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.784870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.784877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.785197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.785206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.785507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.785515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.785803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.785811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.786145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.786153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.786482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.786490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.786792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.786800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.787045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.787053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 
00:29:24.448 [2024-10-11 12:03:08.787152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.787158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.787502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.787510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.787885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.787894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.788289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.788297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.788612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.788621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.788983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.788991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.789299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.789307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.789674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.789682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.789970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.789977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.790304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.790312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 
00:29:24.448 [2024-10-11 12:03:08.790641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.790649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.791051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.791062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.791315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.791323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.791640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.791649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.791975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.791984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.792306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.792314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.792639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.792647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.792987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.792996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.793323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.793332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.793652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.793660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 
00:29:24.448 [2024-10-11 12:03:08.793970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.793979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.794323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.794332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.794663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.794676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.794994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.448 [2024-10-11 12:03:08.795003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.448 qpair failed and we were unable to recover it. 00:29:24.448 [2024-10-11 12:03:08.795329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.795337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.795545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.795554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.795609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.795618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.796030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.796038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.796354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.796362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.796573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.796581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 
00:29:24.449 [2024-10-11 12:03:08.796837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.796848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.797125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.797133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.797564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.797572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.797783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.797791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.798164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.798171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.798386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.798393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.798717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.798725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.799061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.799069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.799398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.799408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.799735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.799743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 
00:29:24.449 [2024-10-11 12:03:08.799931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.799939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.800263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.800270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.800475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.800483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.800745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.800753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.801023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.801030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.801335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.801343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.801663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.801705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.802123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.802132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.802448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.802455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.802752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.802760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 
00:29:24.449 [2024-10-11 12:03:08.803094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.803101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.803316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.803324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.803498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.803506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.803706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.803715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.804084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.804091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.804437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.804446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.804791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.804799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.805107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.805114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.805439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.449 [2024-10-11 12:03:08.805446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.449 qpair failed and we were unable to recover it. 00:29:24.449 [2024-10-11 12:03:08.805748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.805756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 
00:29:24.450 [2024-10-11 12:03:08.805978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.805986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.806315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.806322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.806672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.806679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.807045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.807054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.807371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.807378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.807684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.807697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.808011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.808018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.808313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.808321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.808641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.808648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.808965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.808973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 
00:29:24.450 [2024-10-11 12:03:08.809189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.809197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.809514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.809523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.809697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.809706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.809997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.810004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.810333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.810340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.810664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.810678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.811000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.811007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.811336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.811343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.811644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.811652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.811867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.811875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 
00:29:24.450 [2024-10-11 12:03:08.812206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.812214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.812442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.812449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.812738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.812746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.812965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.812973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.813149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.813157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.813499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.813507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.813830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.813838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.814175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.814182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.814478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.814486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.814862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.814870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 
00:29:24.450 [2024-10-11 12:03:08.815175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.815182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.815504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.815512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.815837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.815845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.816166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.816174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.816529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.816536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.816840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.816848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.817061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.817069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.817372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.817379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.817704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.450 [2024-10-11 12:03:08.817712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.450 qpair failed and we were unable to recover it. 00:29:24.450 [2024-10-11 12:03:08.818052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.818059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 
00:29:24.451 [2024-10-11 12:03:08.818383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.818390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.818584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.818591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.818888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.818896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.819214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.819221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.819528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.819536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.819889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.819896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.820203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.820210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.820561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.820568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.820949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.820957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.821284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.821291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 
00:29:24.451 [2024-10-11 12:03:08.821617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.821625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.821822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.821829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.822161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.822168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.822492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.822499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.822839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.822846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.823166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.823174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.823490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.823498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.823826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.823833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.824152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.824160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.824455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.824462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 
00:29:24.451 [2024-10-11 12:03:08.824786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.824794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.825021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.825029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.825360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.825367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.825687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.825696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.826054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.826060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.826380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.826387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.826582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.826589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.826888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.826896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.827066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.827074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.827351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.827358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 
00:29:24.451 [2024-10-11 12:03:08.827678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.827686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.827993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.828000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.828327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.828335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.828655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.828665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.829078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.829086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.829409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.829416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.829608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.829616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.829991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.829999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.830171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.830178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.451 qpair failed and we were unable to recover it. 00:29:24.451 [2024-10-11 12:03:08.830470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.451 [2024-10-11 12:03:08.830477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 
00:29:24.452 [2024-10-11 12:03:08.830792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.830800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.831147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.831155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.831485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.831493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.831813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.831820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.832140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.832147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.832520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.832527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.832844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.832851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.833193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.833200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.833523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.833530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.833842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.833851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 
00:29:24.452 [2024-10-11 12:03:08.834179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.834188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.834505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.834512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.834846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.834853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.835188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.835197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.835509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.835518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.835716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.835725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.836038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.836045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.836368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.836375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.836693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.836701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.837040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.837047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 
00:29:24.452 [2024-10-11 12:03:08.837374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.837383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.837688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.837697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.838038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.838045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.838374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.838382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.838705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.838713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.839097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.839104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.839428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.839435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.839761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.839768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.840078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.840086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.840405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.840412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 
00:29:24.452 [2024-10-11 12:03:08.840742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.840750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.841070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.841078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.841401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.841409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.841748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.841758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.842088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.842096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.842384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.842392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.842731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.842738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.843142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.843149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.843470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.843477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.452 qpair failed and we were unable to recover it. 00:29:24.452 [2024-10-11 12:03:08.843796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.452 [2024-10-11 12:03:08.843804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 
00:29:24.453 [2024-10-11 12:03:08.844010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.844017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.844189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.844196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.844520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.844528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.844707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.844715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.844834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.844841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.845151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.845158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.845467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.845474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.845801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.845809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.846176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.846184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.846492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.846499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 
00:29:24.453 [2024-10-11 12:03:08.846932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.846939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.847282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.847290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.847545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.847552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.847871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.847878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.848205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.848212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.848537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.848545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.848846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.848854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.849158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.849166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.849435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.849443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.849628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.849637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 
00:29:24.453 [2024-10-11 12:03:08.849976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.849986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.850347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.850355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.850709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.850718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.851048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.851056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.851375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.851383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.851731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.851739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.852051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.852059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.852378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.852385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.852688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.852696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.853032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.853039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 
00:29:24.453 [2024-10-11 12:03:08.853397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.853406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.853748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.853755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.853963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.853971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.854204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.854213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.854532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.854541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.854840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.854848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.855047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.855055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.855381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.855389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.855717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.855726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 00:29:24.453 [2024-10-11 12:03:08.856033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.453 [2024-10-11 12:03:08.856041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.453 qpair failed and we were unable to recover it. 
00:29:24.454 [2024-10-11 12:03:08.856388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.856397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.856602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.856609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.856803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.856812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.857020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.857029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.857375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.857384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.857715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.857723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.858032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.858041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.858262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.858270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.858649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.858660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.858896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.858905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 
00:29:24.454 [2024-10-11 12:03:08.859243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.859252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.859552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.859560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.859892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.859902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.860201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.860208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.860336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.860343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.860564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.860572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.860819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.860827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.861199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.861209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.861558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.861566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.861864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.861873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 
00:29:24.454 [2024-10-11 12:03:08.862183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.862191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.862515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.862523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.862934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.862942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.863155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.863163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.863497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.863505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.863712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.863721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.864057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.864065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.864365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.864373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.864658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.864665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 00:29:24.454 [2024-10-11 12:03:08.864985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.454 [2024-10-11 12:03:08.864996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.454 qpair failed and we were unable to recover it. 
00:29:24.454 [2024-10-11 12:03:08.865323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.454 [2024-10-11 12:03:08.865331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.454 qpair failed and we were unable to recover it.
00:29:24.454 [2024-10-11 12:03:08.865640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.454 [2024-10-11 12:03:08.865649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.454 qpair failed and we were unable to recover it.
00:29:24.454 [2024-10-11 12:03:08.865887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.454 [2024-10-11 12:03:08.865895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.454 qpair failed and we were unable to recover it.
00:29:24.454 [2024-10-11 12:03:08.866232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.454 [2024-10-11 12:03:08.866241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.454 qpair failed and we were unable to recover it.
00:29:24.454 [2024-10-11 12:03:08.866576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.454 [2024-10-11 12:03:08.866585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.454 qpair failed and we were unable to recover it.
00:29:24.454 [2024-10-11 12:03:08.866805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.454 [2024-10-11 12:03:08.866817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.454 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.867117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.867124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.867463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.867471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.867689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.867697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.868048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.868055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.868246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.868255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.868579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.868588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.868887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.868895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.869075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.869084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.869477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.869485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.869774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.869783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.869981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.869989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.870320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.870328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.870555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.870563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.870846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.870856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.871187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.871195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.871517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.871525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.871877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.871885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.872213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.872221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.872542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.872551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.872907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.872914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.873238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.873246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.873570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.873580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.873890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.873899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.874249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.874257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.874590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.874599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.874888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.874897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.875237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.875249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.875474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.875484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.875760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.875769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.876152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.876160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.876484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.876492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.876817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.876826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.877207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.877215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.877599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.877608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.877921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.877930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.878236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.878244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.878565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.878572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.878868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.878877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.879203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.879211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.455 [2024-10-11 12:03:08.879536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.455 [2024-10-11 12:03:08.879543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.455 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.879783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.879792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.879989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.879998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.880349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.880357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.880685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.880692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.881025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.881033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.881376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.881384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.881707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.881716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.881974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.881983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.882287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.882295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.882631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.882641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.882951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.882960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.883340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.883348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.883682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.883691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.884035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.884043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.884383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.884393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.884716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.884724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.884946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.884953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.885241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.885249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.885561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.885570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.885760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.885769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.886048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.886057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.886390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.886397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.886809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.886819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.887130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.887139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.887444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.887453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.887800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.887809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.888141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.888150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.888481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.888490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.888711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.888720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.889083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.889090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.889344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.889352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.889705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.889714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.890079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.890088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.890280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.890290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.890633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.890641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.891029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.891077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.891289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.891297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.891642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.891650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.891996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.892005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.892307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.456 [2024-10-11 12:03:08.892315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.456 qpair failed and we were unable to recover it.
00:29:24.456 [2024-10-11 12:03:08.892596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.892603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.893038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.893047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.893368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.893375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.893699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.893708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.894095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.894103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.894383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.894391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.894718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.894725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.895036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.895045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.895373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.895385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.895701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.895710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.895859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.895869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.896047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.896054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.896388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.896396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.896732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.896741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.897085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.897095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.897415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.897423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.897740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.897748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.898077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.898085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.898405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.898413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.898739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.898747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.899073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.899081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.899259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.899268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.899478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.899487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.899816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.899825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.900158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.900166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.900493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.900502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.900825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.900834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.901138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.901147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.901467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.901475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.901801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.901810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.902132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.902139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.902476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.902484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.902808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.902816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.903153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.903162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.903483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.903493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.903729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.903738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.904053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.904061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.904275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.904283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.904625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.904634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.904937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.904945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.457 qpair failed and we were unable to recover it.
00:29:24.457 [2024-10-11 12:03:08.905273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.457 [2024-10-11 12:03:08.905281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.905613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.905623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.905869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.905877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.906234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.906242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.906543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.906552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.906770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.906780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.907105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.907112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.907436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.907444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.907784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.907792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.907993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.908002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.908332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.908342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.908662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.908676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.909081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.909090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.909418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.909429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.909616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.909625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.909809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.909817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.910158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.910166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.910379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.910386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.910617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.910624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.910956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.910964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.911272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.911281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.911600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.911608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.911838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.911847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.912148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.912156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.912505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.912514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.912832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.912841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.913164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.913172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.913499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.913508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.913821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.913830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.914035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.914044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.914404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.914412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.914746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.914754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.914979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.914986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.915269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.915277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.915457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.915465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.458 qpair failed and we were unable to recover it.
00:29:24.458 [2024-10-11 12:03:08.915797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.458 [2024-10-11 12:03:08.915805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.916126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.459 [2024-10-11 12:03:08.916133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.916442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.459 [2024-10-11 12:03:08.916450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.916765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.459 [2024-10-11 12:03:08.916772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.917097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.459 [2024-10-11 12:03:08.917106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.917464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.459 [2024-10-11 12:03:08.917471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.917779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.459 [2024-10-11 12:03:08.917787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.918168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.459 [2024-10-11 12:03:08.918177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.918507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.459 [2024-10-11 12:03:08.918514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.459 qpair failed and we were unable to recover it.
00:29:24.459 [2024-10-11 12:03:08.918845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.918853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.919180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.919187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.919401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.919409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.919737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.919745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.920080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.920089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.920402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.920410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.920705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.920714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.921050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.921057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.921387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.921397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.921715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.921734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 
00:29:24.459 [2024-10-11 12:03:08.922045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.922053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.922374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.922381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.922555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.922563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.922924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.922933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.923244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.923251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.923572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.923580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.923903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.923910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.924213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.924223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.924548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.924556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.924860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.924868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 
00:29:24.459 [2024-10-11 12:03:08.925192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.925200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.925527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.925535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.925825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.925833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.926134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.926142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.926472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.926480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.926797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.926813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.927020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.927028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.927356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.927364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.927671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.927679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.928006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.928014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 
00:29:24.459 [2024-10-11 12:03:08.928417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.928426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.928739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.459 [2024-10-11 12:03:08.928746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.459 qpair failed and we were unable to recover it. 00:29:24.459 [2024-10-11 12:03:08.929078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.929086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.929453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.929460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.929752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.929761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.930099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.930107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.930425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.930433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.930751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.930759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.931063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.931071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.931393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.931401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 
00:29:24.460 [2024-10-11 12:03:08.931613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.931620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.931968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.931976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.932302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.932310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.932630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.932638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.932930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.932938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.933144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.933153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.933401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.933409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.933733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.933741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.934068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.934076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.934284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.934291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 
00:29:24.460 [2024-10-11 12:03:08.934609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.934624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.934879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.934888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.935269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.935281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.935604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.935612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.935949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.935956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.936281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.936289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.936608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.936615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.936998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.937006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.937343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.937351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.937665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.937692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 
00:29:24.460 [2024-10-11 12:03:08.938032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.938039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.938362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.938371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.938690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.938699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.938924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.938932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.939124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.939133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.939455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.939463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.939768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.939777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.940097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.940106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.940422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.940430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.940739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.940747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 
00:29:24.460 [2024-10-11 12:03:08.941127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.941135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.941463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.941471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.460 [2024-10-11 12:03:08.941792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.460 [2024-10-11 12:03:08.941799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.460 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.942134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.942141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.942462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.942470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.942683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.942693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.943018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.943026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.943355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.943363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.943681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.943688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.944008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.944020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 
00:29:24.461 [2024-10-11 12:03:08.944334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.944341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.944654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.944661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.944849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.944858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.945196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.945203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.945397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.945405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.945783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.945792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.946124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.946133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.946448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.946457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.946824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.946832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.947167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.947176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 
00:29:24.461 [2024-10-11 12:03:08.947494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.947503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.947692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.947701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.948008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.948015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.948226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.948234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.948554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.948561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.948766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.948774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.949110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.949117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.949443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.949451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.949738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.949745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.950069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.950077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 
00:29:24.461 [2024-10-11 12:03:08.950411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.950419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.950746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.950754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.951052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.951060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.951380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.951389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.951715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.951724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.952049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.952057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.952378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.952386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.952587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.952595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.952876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.952885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.953059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.953067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 
00:29:24.461 [2024-10-11 12:03:08.953343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.953351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.953677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.953685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.954011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.461 [2024-10-11 12:03:08.954020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.461 qpair failed and we were unable to recover it. 00:29:24.461 [2024-10-11 12:03:08.954339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.954347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.954571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.954580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.954890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.954898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.955222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.955230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.955407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.955418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.955767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.955776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.956079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.956087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 
00:29:24.462 [2024-10-11 12:03:08.956264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.956273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.956619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.956627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.956953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.956962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.957276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.957284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.957604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.957612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.957960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.957968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.958183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.958191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.958568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.958576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.958909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.958919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.959229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.959236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 
00:29:24.462 [2024-10-11 12:03:08.959449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.959457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.959678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.959686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.960046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.960057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.960376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.960384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.960589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.960597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.960892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.960900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.961222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.961230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.961554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.961563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.961881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.961890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.962201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.962210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 
00:29:24.462 [2024-10-11 12:03:08.962575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.962583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.962878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.962887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.963227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.963236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.963552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.963560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.963902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.963911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.964231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.964240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.964571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.964579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.964910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.964923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.965237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.965245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.965559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.965569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 
00:29:24.462 [2024-10-11 12:03:08.965782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.965793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.966150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.462 [2024-10-11 12:03:08.966159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.462 qpair failed and we were unable to recover it. 00:29:24.462 [2024-10-11 12:03:08.966482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.463 [2024-10-11 12:03:08.966490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.463 qpair failed and we were unable to recover it. 00:29:24.463 [2024-10-11 12:03:08.966814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.463 [2024-10-11 12:03:08.966823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.463 qpair failed and we were unable to recover it. 00:29:24.463 [2024-10-11 12:03:08.967139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.463 [2024-10-11 12:03:08.967148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.463 qpair failed and we were unable to recover it. 00:29:24.463 [2024-10-11 12:03:08.967480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.463 [2024-10-11 12:03:08.967487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.463 qpair failed and we were unable to recover it. 00:29:24.463 [2024-10-11 12:03:08.967879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.463 [2024-10-11 12:03:08.967886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.463 qpair failed and we were unable to recover it. 00:29:24.463 [2024-10-11 12:03:08.968221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.463 [2024-10-11 12:03:08.968229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.463 qpair failed and we were unable to recover it. 00:29:24.463 [2024-10-11 12:03:08.968432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.463 [2024-10-11 12:03:08.968440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.463 qpair failed and we were unable to recover it. 00:29:24.463 [2024-10-11 12:03:08.968746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.463 [2024-10-11 12:03:08.968754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.463 qpair failed and we were unable to recover it. 
00:29:24.463 [... the identical three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every connection attempt from 12:03:08.969084 through 12:03:09.034108 ...]
00:29:24.468 [2024-10-11 12:03:09.034427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.468 [2024-10-11 12:03:09.034434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.468 qpair failed and we were unable to recover it.
00:29:24.468 [2024-10-11 12:03:09.034783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.034792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.034990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.035000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.035174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.035183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.035495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.035503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.035728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.035739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.036047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.036055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.036392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.036400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.036730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.036738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.037050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.037059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.468 [2024-10-11 12:03:09.037382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.037389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 
00:29:24.468 [2024-10-11 12:03:09.037716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.468 [2024-10-11 12:03:09.037725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.468 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.038061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.038072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.038393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.038402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.038771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.038779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.039125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.039133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.039462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.039470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.039784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.039792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.040113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.040120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.040443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.040451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.040785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.040794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 
00:29:24.746 [2024-10-11 12:03:09.041133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.041142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.041463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.041473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.041560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.041568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.041794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.041803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.042019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.042027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.042387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.042395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.042714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.042724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.043052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.043060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.043398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.043406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.043726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.043734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 
00:29:24.746 [2024-10-11 12:03:09.044059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.044066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.045154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.045194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.045552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.045560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.045790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.045798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.046176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.046185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.046548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.046557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.046863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.046871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.047201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.047208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.047528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.047536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.047726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.047735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 
00:29:24.746 [2024-10-11 12:03:09.048103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.048112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.048285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.746 [2024-10-11 12:03:09.048295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.746 qpair failed and we were unable to recover it. 00:29:24.746 [2024-10-11 12:03:09.048644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.048653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.049048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.049056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.049366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.049374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.049709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.049717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.050049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.050057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.050374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.050382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.050892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.050913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.051213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.051221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 
00:29:24.747 [2024-10-11 12:03:09.051541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.051549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.051872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.051881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.052200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.052207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.052524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.052533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.052853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.052861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.053083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.053091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.053425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.053433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.053702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.053710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.053916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.053924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.054251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.054260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 
00:29:24.747 [2024-10-11 12:03:09.054479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.054486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.054821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.054830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.055149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.055156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.055475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.055486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.055694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.055703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.056019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.056028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.056330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.056337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.056534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.056541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.056871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.056879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.057095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.057104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 
00:29:24.747 [2024-10-11 12:03:09.057429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.057436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.057743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.057750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.058102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.058109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.058417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.058425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.058752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.058760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.059077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.059085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.059419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.059427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.059748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.747 [2024-10-11 12:03:09.059757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.747 qpair failed and we were unable to recover it. 00:29:24.747 [2024-10-11 12:03:09.060065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.060073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.060378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.060386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 
00:29:24.748 [2024-10-11 12:03:09.060713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.060721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.061051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.061058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.061376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.061385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.061710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.061719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.062049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.062057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.062279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.062288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.062629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.062637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.063041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.063051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.063366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.063376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.063617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.063625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 
00:29:24.748 [2024-10-11 12:03:09.063930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.063939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.064263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.064271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.064460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.064468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.064790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.064798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.065130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.065138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.065458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.065466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.065791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.065799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.066200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.066208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.066550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.066557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.066847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.066859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 
00:29:24.748 [2024-10-11 12:03:09.067184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.067192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.067509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.067517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.067854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.067863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.068183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.068191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.068440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.068448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.068660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.068674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.068855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.068864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.069199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.069206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.069537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.069545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.069853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.069861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 
00:29:24.748 [2024-10-11 12:03:09.070178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.070186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.070514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.070522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.070851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.070858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.071258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.071266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.071587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.071595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.071914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.071923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.748 [2024-10-11 12:03:09.072244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.748 [2024-10-11 12:03:09.072253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.748 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.072572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.072579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.072907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.072915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.073233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.073240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 
00:29:24.749 [2024-10-11 12:03:09.073562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.073569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.073902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.073910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.074216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.074225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.074541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.074549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.074952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.074960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.075340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.075347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.075688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.075701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.076031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.076040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.076364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.076372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.076704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.076711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 
00:29:24.749 [2024-10-11 12:03:09.076997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.077004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.077341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.077349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.077574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.077582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.077939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.077947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.078311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.078318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.078647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.078655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.078982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.078991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.079310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.079317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.079621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.079628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.079960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.079969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 
00:29:24.749 [2024-10-11 12:03:09.080284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.080292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.080609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.080617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.080961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.080969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.081287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.081296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.081678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.081688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.082006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.082015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.082261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.082268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.082580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.082588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.082913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.082921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.083249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.083256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 
00:29:24.749 [2024-10-11 12:03:09.083567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.083574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.083900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.083907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.084238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.084246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.084300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.084310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.084664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.084677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.749 [2024-10-11 12:03:09.084991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.749 [2024-10-11 12:03:09.084999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.749 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.085314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.085322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.085643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.085650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.085970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.085978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.086350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.086357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 
00:29:24.750 [2024-10-11 12:03:09.086662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.086675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.086970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.086978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.087289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.087305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.087664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.087681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.087992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.088002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.088324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.088333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.088653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.088661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.089062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.089071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.089401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.089409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.089745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.089755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 
00:29:24.750 [2024-10-11 12:03:09.090075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.090082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.090381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.090389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.090710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.090718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.091110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.091117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.091454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.091462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.091789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.091797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.092121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.092129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.092442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.092449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.092790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.092799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 00:29:24.750 [2024-10-11 12:03:09.093142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.750 [2024-10-11 12:03:09.093149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.750 qpair failed and we were unable to recover it. 
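For context on the loop above: errno = 111 on Linux is ECONNREFUSED, the error connect() reports when the destination host is reachable but nothing is listening on the port. The initiator here keeps dialing 10.0.0.2:4420 (4420 is the NVMe/TCP well-known port) while the target is down, so every attempt fails the same way. A minimal standalone sketch (not SPDK code) that reproduces the same errno against a reachable host with no listener:

/* Hypothetical standalone repro, not SPDK's posix_sock_create(): dial a
 * reachable host with no listener on the port and print the errno that
 * comes back. On Linux that is 111, ECONNREFUSED ("Connection refused"). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With the host up but no listener this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

If the host were unreachable instead, the errno would differ (for example EHOSTUNREACH or a timeout), so the steady stream of 111s is itself evidence that the machine is up and only the target process is gone.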
00:29:24.750 [2024-10-11 12:03:09.093458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.750 [2024-10-11 12:03:09.093468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.750 qpair failed and we were unable to recover it.
00:29:24.750 [2024-10-11 12:03:09.093791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.750 [2024-10-11 12:03:09.093800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.750 qpair failed and we were unable to recover it.
00:29:24.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1202945 Killed "${NVMF_APP[@]}" "$@"
00:29:24.750 [2024-10-11 12:03:09.094118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.750 [2024-10-11 12:03:09.094128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.750 qpair failed and we were unable to recover it.
00:29:24.750 [2024-10-11 12:03:09.094489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.750 [2024-10-11 12:03:09.094498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.750 qpair failed and we were unable to recover it.
00:29:24.750 [2024-10-11 12:03:09.094736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.750 [2024-10-11 12:03:09.094746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.750 qpair failed and we were unable to recover it.
00:29:24.750 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:24.750 [2024-10-11 12:03:09.095054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.750 [2024-10-11 12:03:09.095063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.750 qpair failed and we were unable to recover it.
00:29:24.750 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:24.750 [2024-10-11 12:03:09.095402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.750 [2024-10-11 12:03:09.095412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.750 qpair failed and we were unable to recover it.
00:29:24.750 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:24.751 [2024-10-11 12:03:09.095746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:24.751 [2024-10-11 12:03:09.095756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
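The interleaved shell lines above are the pivot of the test case: target_disconnect.sh has just killed the running target (the Killed "${NVMF_APP[@]}" message), and disconnect_init / nvmfappstart begin bringing a fresh target up (the relaunch itself appears a few lines below), while the host side keeps retrying the qpair's socket roughly every 0.3 ms. A hedged sketch of that retry pattern, not SPDK's actual nvme_tcp_qpair_connect_sock():

/* A minimal sketch of the retry pattern visible in this log, assuming a
 * plain blocking socket; SPDK's real implementation differs. Keep
 * re-issuing connect() while the target is down, treating ECONNREFUSED
 * as "target not back yet". */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static bool try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in a = { 0 };
    a.sin_family = AF_INET;
    a.sin_port = htons(port);
    inet_pton(AF_INET, ip, &a.sin_addr);

    bool ok = connect(fd, (struct sockaddr *)&a, sizeof(a)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    /* The log shows an attempt roughly every 0.3 ms; 1 ms is used here. */
    for (int attempt = 1; attempt <= 1000; attempt++) {
        if (try_connect("10.0.0.2", 4420)) {
            printf("target listening again after %d attempts\n", attempt);
            return 0;
        }
        usleep(1000);
    }
    fprintf(stderr, "gave up: qpair failed and we were unable to recover it\n");
    return 1;
}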
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.751 [2024-10-11 12:03:09.095987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.095998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.096348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.096355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.096678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.096687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.096883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.096896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.097223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.097231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.097549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.097557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.097774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.097782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.098122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.098130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.098493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.098500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.098826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.098834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 
00:29:24.751 [2024-10-11 12:03:09.099141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.099149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.099469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.099478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.099814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.099823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.100140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.100150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.100361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.100369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.100753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.100764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.101089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.101100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.101417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.101425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.101644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.101654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 00:29:24.751 [2024-10-11 12:03:09.101989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.751 [2024-10-11 12:03:09.101998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.751 qpair failed and we were unable to recover it. 
00:29:24.751 [2024-10-11 12:03:09.102319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.102327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 [2024-10-11 12:03:09.102651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.102658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 [2024-10-11 12:03:09.102920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.102929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 [2024-10-11 12:03:09.103235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.103242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 [2024-10-11 12:03:09.104171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1204082
00:29:24.751 [2024-10-11 12:03:09.104205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1204082
00:29:24.751 [2024-10-11 12:03:09.104563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.104575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1204082 ']'
00:29:24.751 [2024-10-11 12:03:09.104924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.104937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:24.751 [2024-10-11 12:03:09.105248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.105260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:24.751 [2024-10-11 12:03:09.105611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.105624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:24.751 [2024-10-11 12:03:09.105950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.105963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:24.751 [2024-10-11 12:03:09.106164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.106177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.751 [2024-10-11 12:03:09.106372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.751 [2024-10-11 12:03:09.106383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.751 qpair failed and we were unable to recover it.
00:29:24.752 [2024-10-11 12:03:09.106704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.752 [2024-10-11 12:03:09.106714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.752 qpair failed and we were unable to recover it.
00:29:24.752 [2024-10-11 12:03:09.107217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.752 [2024-10-11 12:03:09.107236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.752 qpair failed and we were unable to recover it.
00:29:24.752 [2024-10-11 12:03:09.107579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.752 [2024-10-11 12:03:09.107589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.752 qpair failed and we were unable to recover it.
00:29:24.752 [2024-10-11 12:03:09.107929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.752 [2024-10-11 12:03:09.107939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.752 qpair failed and we were unable to recover it.
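Here the harness has relaunched nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, and waitforlisten (with local max_retries=100, visible above) polls until the new process accepts connections on /var/tmp/spdk.sock. A sketch of that polling idea under stated assumptions (the real waitforlisten is a bash helper in autotest_common.sh and may check the PID and use different intervals):

/* Assumed shape of a waitforlisten-style probe, not the actual helper:
 * keep connecting to a Unix-domain socket until the freshly launched
 * server accepts, or until the retry budget is spent. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un a = { 0 };
        a.sun_family = AF_UNIX;
        strncpy(a.sun_path, path, sizeof(a.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&a, sizeof(a)) == 0) {
            close(fd);
            return 0;               /* server is up and accepting RPCs */
        }
        close(fd);
        usleep(100 * 1000);         /* 100 ms between probes */
    }
    return -1;                      /* budget spent: the wait failed */
}

int main(void)
{
    /* max_retries=100 matches the value visible in the log above. */
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}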
00:29:24.752 [2024-10-11 12:03:09.108262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.108272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.108606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.108616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.108925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.108937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.109246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.109262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.109595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.109605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.109935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.109948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.110286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.110294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.110624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.110633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.110851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.110861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.111074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.111082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 
00:29:24.752 [2024-10-11 12:03:09.111209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.111220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.111722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.111830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6b4000b90 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.112190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.112228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6b4000b90 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.112597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.112628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6b4000b90 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.112887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.112897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.113237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.113244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.113578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.113589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.113925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.113935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.114248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.114256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.114588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.114595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 
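One detail worth noticing above: for three consecutive attempts the failing qpair is reported as tqpair=0x7fe6b4000b90 rather than 0x102dbd0, then the original address returns. A plausible reading (an assumption, not something this log confirms) is that a second, separately allocated qpair object is retrying at the same time; distinct live allocations necessarily print distinct addresses, as the trivial sketch below illustrates:

/* Trivial illustration (hypothetical struct, not SPDK code): every live
 * allocation has its own address, so two concurrently retrying qpair
 * objects would log two different tqpair=... values. */
#include <stdio.h>
#include <stdlib.h>

struct qpair { int sockfd; };

int main(void)
{
    struct qpair *a = calloc(1, sizeof *a);
    struct qpair *b = calloc(1, sizeof *b);   /* second object, new address */
    printf("tqpair=%p\n", (void *)a);
    printf("tqpair=%p\n", (void *)b);
    free(a);
    free(b);
    return 0;
}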
00:29:24.752 [2024-10-11 12:03:09.114933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.114942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.115159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.115168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.115372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.115381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.115624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.115632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.115931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.115941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.116287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.116295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.116612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.116621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.116944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.116954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.117292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.117300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.117611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.117619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 
00:29:24.752 [2024-10-11 12:03:09.117991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.118004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.118200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.118210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.118554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.118566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.118887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.118896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.119221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.119230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.752 [2024-10-11 12:03:09.119557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.752 [2024-10-11 12:03:09.119569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.752 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.119890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.119899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.120150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.120158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.120339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.120348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.120691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.120700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 
00:29:24.753 [2024-10-11 12:03:09.121145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.121153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.121459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.121468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.121686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.121694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.121975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.121983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.122304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.122312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.122633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.122642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.122934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.122943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.123350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.123360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.123695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.123704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.124125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.124135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 
00:29:24.753 [2024-10-11 12:03:09.124525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.124533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.124739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.124748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.125020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.125028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.125363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.125372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.125720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.125730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.125956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.125963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.126154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.126166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.126366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.126375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.126706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.126716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.127008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.127017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 
00:29:24.753 [2024-10-11 12:03:09.127334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.127342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.127685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.127694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.128035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.128044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.128339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.128346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.128678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.128686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.129070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.129078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.129272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.129279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.129645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.129653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.129987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.130000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 00:29:24.753 [2024-10-11 12:03:09.130352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.753 [2024-10-11 12:03:09.130360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.753 qpair failed and we were unable to recover it. 
00:29:24.753 [2024-10-11 12:03:09.130762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.753 [2024-10-11 12:03:09.130771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.753 qpair failed and we were unable to recover it.
[ ... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt logged from 12:03:09.131 through 12:03:09.161 ... ]
00:29:24.756 [2024-10-11 12:03:09.161446] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization...
00:29:24.756 [2024-10-11 12:03:09.161514] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
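The two initialization lines above show the DPDK EAL argument vector that the SPDK nvmf application hands to DPDK at startup. As an illustrative sketch only (this is not SPDK source; the argument list is copied, abridged, from the log line above), a DPDK-based program passes such a vector to rte_eal_init(), which is the real entry point that consumes these flags:

    /* Sketch: feeding the logged EAL parameters to DPDK.
     * Not SPDK code; argv values are taken from the log entry above
     * (log-level flags abridged for brevity). */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf", "-c", "0xF0", "--no-telemetry",
            "--log-level=lib.eal:6",
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0", "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return 1;
        }
        /* ... the application would start its reactors/pollers here ... */
        rte_eal_cleanup();
        return 0;
    }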
[ ... the identical failure sequence continues without interruption (connect() to 10.0.0.2 port 4420 refused with errno = 111, tqpair=0x102dbd0, "qpair failed and we were unable to recover it.") from 12:03:09.161 through 12:03:09.197 ... ]
00:29:24.759 [2024-10-11 12:03:09.197866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.759 [2024-10-11 12:03:09.197875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.759 qpair failed and we were unable to recover it.
00:29:24.759 [2024-10-11 12:03:09.198220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.198227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.198541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.198549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.198844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.198853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.199174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.199183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.199505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.199513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.199838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.199847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.200179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.200187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.200519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.200527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.200847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.200855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.201180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.201188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 
00:29:24.759 [2024-10-11 12:03:09.201514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.201523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.201844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.201853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.202054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.202062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.202392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.202401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.202634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.202646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.202954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.202962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.203300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.203309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.203698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.203707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.204050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.204059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.204400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.204408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 
00:29:24.759 [2024-10-11 12:03:09.204700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.204712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.759 [2024-10-11 12:03:09.205052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.759 [2024-10-11 12:03:09.205062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.759 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.205363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.205371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.205701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.205710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.205910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.205918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.206089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.206098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.206453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.206460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.206792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.206800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.207145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.207153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.207455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.207462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 
00:29:24.760 [2024-10-11 12:03:09.207790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.207799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.208130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.208139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.208371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.208378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.208737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.208753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.209102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.209110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.209411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.209420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.209759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.209767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.210087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.210095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.210415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.210423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.210740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.210749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 
00:29:24.760 [2024-10-11 12:03:09.211075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.211083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.211315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.211322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.211527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.211535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.211831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.211839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.212201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.212209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.212406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.212414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.212754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.212762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.213060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.213070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.213124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.213134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.213436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.213444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 
00:29:24.760 [2024-10-11 12:03:09.213753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.213762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.214092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.214099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.214404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.214412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.214735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.214744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.215048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.215057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.215382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.215390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.215691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.215699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.216057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.216066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.216369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.216377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.216656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.216663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 
00:29:24.760 [2024-10-11 12:03:09.216997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.217006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.217215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.760 [2024-10-11 12:03:09.217226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.760 qpair failed and we were unable to recover it. 00:29:24.760 [2024-10-11 12:03:09.217447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.217455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.217661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.217680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.218037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.218047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.218274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.218282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.218605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.218612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.218948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.218956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.219313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.219320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.219648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.219655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 
00:29:24.761 [2024-10-11 12:03:09.219991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.219999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.220331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.220339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.220677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.220685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.220968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.220975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.221280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.221288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.221588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.221596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.221927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.221935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.222119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.222127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.222461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.222469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.222692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.222700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 
00:29:24.761 [2024-10-11 12:03:09.223051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.223061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.223354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.223362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.223716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.223725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.224048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.224056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.224422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.224431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.224765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.224774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.225123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.225132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.225456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.225464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.225757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.225765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.225983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.226001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 
00:29:24.761 [2024-10-11 12:03:09.226331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.226339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.226686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.226694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.227046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.227053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.227274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.227282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.227650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.227657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.228023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.228032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.228375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.228382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.228683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.228692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.229049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.229057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.229344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.229352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 
00:29:24.761 [2024-10-11 12:03:09.229674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.229683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.229968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.229977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.761 qpair failed and we were unable to recover it. 00:29:24.761 [2024-10-11 12:03:09.230335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.761 [2024-10-11 12:03:09.230344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.230664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.230686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.231008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.231017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.231348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.231357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.231572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.231580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.231904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.231912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.232091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.232100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.232371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.232379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 
00:29:24.762 [2024-10-11 12:03:09.232679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.232688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.232976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.232985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.233290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.233297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.233632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.233642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.233932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.233940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.234299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.234306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.234607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.234619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.234949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.234957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.235241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.235250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.235567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.235575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 
00:29:24.762 [2024-10-11 12:03:09.235877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.235885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.236118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.236127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.236474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.236482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.236815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.236823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.237182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.237190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.237519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.237527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.237740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.237748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.238082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.238090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.238451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.238459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 00:29:24.762 [2024-10-11 12:03:09.238742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.762 [2024-10-11 12:03:09.238750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.762 qpair failed and we were unable to recover it. 
00:29:24.762 [2024-10-11 12:03:09.238981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.762 [2024-10-11 12:03:09.238990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.762 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats verbatim 53 more times, timestamps advancing from 12:03:09.239161 through 12:03:09.254792; only the timestamps differ between attempts ...]
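errno = 111 is ECONNREFUSED on Linux: each TCP connection attempt to 10.0.0.2:4420 (the NVMe-oF default TCP port) is being answered with an RST, i.e. nothing is listening on that address/port when the initiator connects. A minimal sketch of how this errno surfaces from a plain connect() call, independent of SPDK (the address and port are copied from the log above; assumes a reachable Linux host with the port closed):

/* repro_econnrefused.c - minimal reproduction of the errno = 111 failure in
 * the log: a plain connect() to an address/port with no listener. Assumes a
 * Linux host; 10.0.0.2:4420 is copied from the log and is expected to refuse
 * (if the host were unreachable you would see ETIMEDOUT/EHOSTUNREACH instead). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe-oF default TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* A reachable host with no listener answers the SYN with RST,
         * so connect() fails with errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Compiled with, e.g., cc repro_econnrefused.c and run while nothing listens on the port, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create lines above.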
00:29:24.764 [2024-10-11 12:03:09.254904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the connect()/qpair failure sequence resumes at 12:03:09.255076 and repeats 156 more times through 12:03:09.302915, every attempt refused ...]
00:29:24.767 [2024-10-11 12:03:09.303244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.303253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.303429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.303438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.303656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.303676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.303944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.303952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.304243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.304251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.304588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.304596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.304926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.304935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.305172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.305179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.305476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.305484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.305815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.305825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 
00:29:24.768 [2024-10-11 12:03:09.306121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.306130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.306446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.306455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.306773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.306783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.306982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.306991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.307318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.307328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.307560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.307568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.307906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.307916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.308105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.308116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.308473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.308482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 00:29:24.768 [2024-10-11 12:03:09.308781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.768 [2024-10-11 12:03:09.308790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.768 qpair failed and we were unable to recover it. 
00:29:24.768 [2024-10-11 12:03:09.309113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.768 [2024-10-11 12:03:09.309123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.768 qpair failed and we were unable to recover it.
00:29:24.768 [... one more identical failure triplet at 12:03:09.309426 ...]
00:29:24.768 [2024-10-11 12:03:09.309429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:24.768 [2024-10-11 12:03:09.309480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:24.768 [2024-10-11 12:03:09.309488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:24.768 [2024-10-11 12:03:09.309496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:24.768 [2024-10-11 12:03:09.309502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:24.768 [... failure triplet repeats at 12:03:09.309762, .310089, .310430, .310767, .311014 and .311365, still errno = 111 against 10.0.0.2:4420 ...]
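The five app_setup_trace notices above are a how-to: the nvmf app was started with tracepoint group mask 0xFFFF, and its trace buffer can be captured live or copied for offline use. A minimal sketch using only the command and shm path named in the notices (the /tmp destination is an assumption for illustration):

# Sketch based on the NOTICE lines above; the app name 'nvmf', instance id 0,
# and /dev/shm/nvmf_trace.0 come straight from the log; /tmp is assumed.
spdk_trace -s nvmf -i 0            # snapshot events from the running SPDK app
cp /dev/shm/nvmf_trace.0 /tmp/     # keep the raw trace file for offline analysis/debug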
00:29:24.768 [... failure triplet repeats at 12:03:09.311680 ...]
00:29:24.768 [2024-10-11 12:03:09.311568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:24.768 [2024-10-11 12:03:09.311734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:24.768 [2024-10-11 12:03:09.311840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:24.768 [2024-10-11 12:03:09.311842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:24.769 [... failure triplet repeats at 12:03:09.311894, .312211, .312565, .312902, .313112, .313523, .313869 and .314227 ...]
00:29:24.769 [2024-10-11 12:03:09.314538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:24.769 [2024-10-11 12:03:09.314547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:24.769 qpair failed and we were unable to recover it.
00:29:24.772 [... the same connect()/qpair-failure triplet repeats for every reconnect attempt from 12:03:09.314895 through 12:03:09.352827; each attempt fails with errno = 111 (ECONNREFUSED) against 10.0.0.2:4420 ...]
00:29:24.772 [2024-10-11 12:03:09.353154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.353163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.353483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.353493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.353676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.353685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.354039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.354050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.354229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.354237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.354529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.354537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.354831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.354841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.355136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.355144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.355473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.355481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.355692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.355701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 
00:29:24.772 [2024-10-11 12:03:09.356009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.356016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.356325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.356333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.356537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.356545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.356775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.356784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.356975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.356983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.357303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.357312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.357641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.357650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.357970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.357979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.358306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.358313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.358595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.358604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 
00:29:24.772 [2024-10-11 12:03:09.358845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.358853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.359047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.772 [2024-10-11 12:03:09.359055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.772 qpair failed and we were unable to recover it. 00:29:24.772 [2024-10-11 12:03:09.359402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.359410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 00:29:24.773 [2024-10-11 12:03:09.359691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.359699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 00:29:24.773 [2024-10-11 12:03:09.360085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.360093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 00:29:24.773 [2024-10-11 12:03:09.360312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.360319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 00:29:24.773 [2024-10-11 12:03:09.360700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.360715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 00:29:24.773 [2024-10-11 12:03:09.361012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.361020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 00:29:24.773 [2024-10-11 12:03:09.361363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.361373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 00:29:24.773 [2024-10-11 12:03:09.361695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.361705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 
00:29:24.773 [2024-10-11 12:03:09.361902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.773 [2024-10-11 12:03:09.361910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:24.773 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.362104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.362114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.362381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.362393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.362699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.362709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.363072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.363080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.363399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.363408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.363717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.363725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.363925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.363935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.364273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.364281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.364494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.364502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 
00:29:25.053 [2024-10-11 12:03:09.364834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.364843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.365105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.365112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.365312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.365321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.365665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.365699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.365874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.365885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.366213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.366222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.366425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.366433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.366757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.366766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.367194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.367202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.367555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.367564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 
00:29:25.053 [2024-10-11 12:03:09.367847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.367857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.368180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.368188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.368489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.368498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.368686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.368698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.368897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.368907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.369244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.369253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.369456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.369464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.369652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.053 [2024-10-11 12:03:09.369660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.053 qpair failed and we were unable to recover it. 00:29:25.053 [2024-10-11 12:03:09.370022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.370030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.370363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.370372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 
00:29:25.054 [2024-10-11 12:03:09.370566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.370574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.370882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.370889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.371066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.371074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.371411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.371419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.371737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.371746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.372064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.372075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.372251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.372259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.372439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.372447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.372499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.372508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.372814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.372822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 
00:29:25.054 [2024-10-11 12:03:09.373027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.373036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.373364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.373373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.373564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.373572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.373866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.373876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.374198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.374206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.374404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.374413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.374740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.374748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.375065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.375073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.375398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.375407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.375643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.375651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 
00:29:25.054 [2024-10-11 12:03:09.376005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.376013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.376355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.376363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.376545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.376554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.376743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.376752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.377104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.377112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.377292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.377300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.377496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.377505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.377725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.377734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.378041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.378049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.378339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.378347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 
00:29:25.054 [2024-10-11 12:03:09.378568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.378577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.378889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.378897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.379218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.379226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.379500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.379509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.379766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.379774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.379971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.379979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.380370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.380379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.380709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.380720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.054 [2024-10-11 12:03:09.381092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.054 [2024-10-11 12:03:09.381100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.054 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.381294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.381302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 
00:29:25.055 [2024-10-11 12:03:09.381631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.381639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.381920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.381930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.382288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.382297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.382641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.382648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.382987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.382996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.383315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.383324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.383517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.383528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.383742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.383751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.384091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.384101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.384468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.384477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 
00:29:25.055 [2024-10-11 12:03:09.384698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.384706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.385004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.385014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.385293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.385301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.385653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.385662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.386010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.386019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.386192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.386202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.386501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.386508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.386837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.386845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.387165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.387173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.387502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.387509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 
00:29:25.055 [2024-10-11 12:03:09.387845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.387853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.388266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.388276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.388577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.388584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.388886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.388894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.389203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.389212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.389404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.389412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.389755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.389762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.390104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.390111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.390482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.390490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 00:29:25.055 [2024-10-11 12:03:09.390793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.055 [2024-10-11 12:03:09.390802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.055 qpair failed and we were unable to recover it. 
00:29:25.055 [2024-10-11 12:03:09.391125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.055 [2024-10-11 12:03:09.391134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.055 qpair failed and we were unable to recover it.
00:29:25.055 [... the same three-line record (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt timestamped between 2024-10-11 12:03:09.391452 and 12:03:09.450796 ...]
00:29:25.061 [2024-10-11 12:03:09.451098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.061 [2024-10-11 12:03:09.451106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.061 qpair failed and we were unable to recover it.
00:29:25.061 [2024-10-11 12:03:09.451403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.451411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.451701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.451708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.451922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.451930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.452121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.452129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.452442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.452450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.452647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.452656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.452986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.452993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.453044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.453050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.453240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.453247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.453539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.453548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 
00:29:25.061 [2024-10-11 12:03:09.453919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.453927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.454242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.454250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.454586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.454593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.454931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.454939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.455143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.455152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.455488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.455496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.455834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.455842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.456136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.456144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.456504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.456511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.456854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.456862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 
00:29:25.061 [2024-10-11 12:03:09.457062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.457069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.457374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.457382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.457745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.457753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.458110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.458118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.458341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.458348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.458533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.458542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.458901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.458912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.459218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.459228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.459535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.459543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.459720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.459727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 
00:29:25.061 [2024-10-11 12:03:09.460065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.460072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.460349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.460357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.460682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.061 [2024-10-11 12:03:09.460689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.061 qpair failed and we were unable to recover it. 00:29:25.061 [2024-10-11 12:03:09.460887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.460895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.461106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.461114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.461404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.461412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.461737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.461745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.462098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.462105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.462156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.462162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.462472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.462480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 
00:29:25.062 [2024-10-11 12:03:09.462843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.462851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.463052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.463060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.463392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.463401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.463751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.463759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.464112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.464120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.464396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.464404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.464741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.464749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.465173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.465181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.465507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.465514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.465848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.465856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 
00:29:25.062 [2024-10-11 12:03:09.466209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.466216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.466402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.466410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.466723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.466733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.467051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.467059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.467407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.467415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.467766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.467774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.467947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.467954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.468006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.468014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.468450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.468457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.468780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.468790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 
00:29:25.062 [2024-10-11 12:03:09.469116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.469123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.469325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.469332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.469688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.469699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.470012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.470020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.470180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.470190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.470392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.470399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.470798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.470806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.471128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.471135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.471450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.471458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.471783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.471792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 
00:29:25.062 [2024-10-11 12:03:09.471993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.062 [2024-10-11 12:03:09.472000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.062 qpair failed and we were unable to recover it. 00:29:25.062 [2024-10-11 12:03:09.472369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.472377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.472425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.472433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.472810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.472818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.473103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.473111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.473349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.473356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.473688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.473696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.474040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.474047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.474342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.474350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.474679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.474686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 
00:29:25.063 [2024-10-11 12:03:09.475043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.475051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.475416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.475423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.475735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.475743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.476096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.476103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.476441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.476449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.476806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.476815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.477143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.477151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.477471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.477480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.477805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.477812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.478146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.478154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 
00:29:25.063 [2024-10-11 12:03:09.478353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.478362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.478651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.478658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.478978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.478986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.479350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.479358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.479681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.479689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.480052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.480059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.480245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.480252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.480430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.480438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.480754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.480761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.481098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.481106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 
00:29:25.063 [2024-10-11 12:03:09.481285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.481294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.481657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.481665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.481972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.481979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.482262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.482270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.482516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.482524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.482854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.482862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.483209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.483217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.483569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.483577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.483909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.483916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.484246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.484254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 
00:29:25.063 [2024-10-11 12:03:09.484578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.484586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.063 [2024-10-11 12:03:09.484892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.063 [2024-10-11 12:03:09.484899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.063 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.485200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.485207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.485554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.485561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.485899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.485907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.486098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.486106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.486395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.486402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.486706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.486714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.486944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.486952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.487301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.487309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 
00:29:25.064 [2024-10-11 12:03:09.487589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.487600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.488010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.488017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.488292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.488300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.488645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.488654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.489013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.489022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.489373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.489381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.489702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.489711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.489940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.489948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.490213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.490220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.490528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.490536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 
00:29:25.064 [2024-10-11 12:03:09.490817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.490824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.491144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.491152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.491435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.491442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.491823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.491832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.492161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.492168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.492400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.492408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.492601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.492619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.492827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.492835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.493088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.493097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.493292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.493301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 
00:29:25.064 [2024-10-11 12:03:09.493656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.493665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.493889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.493897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.494215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.494222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.494553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.494561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.494905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.494914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.495238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.495246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.495588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.495597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.495838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.495851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.496198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.496207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.496379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.496388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 
00:29:25.064 [2024-10-11 12:03:09.496716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.496726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.497024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.064 [2024-10-11 12:03:09.497031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.064 qpair failed and we were unable to recover it. 00:29:25.064 [2024-10-11 12:03:09.497266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.497273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.497609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.497618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.497804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.497812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.498042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.498050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.498255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.498264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.498557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.498565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.498848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.498856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.499236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.499245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 
00:29:25.065 [2024-10-11 12:03:09.499464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.499471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.499826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.499834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.500028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.500035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.500277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.500285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.500561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.500569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.500922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.500930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.501293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.501301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.501600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.501607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.501849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.501857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.502147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.502155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 
00:29:25.065 [2024-10-11 12:03:09.502524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.502533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.502728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.502737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.502912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.502920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.503129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.503136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.503539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.503547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.503914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.503923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.504265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.504273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.504607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.504617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.504818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.504827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.505135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.505144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 
00:29:25.065 [2024-10-11 12:03:09.505483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.505491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.505804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.505813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.506014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.506021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.506384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.506393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.506741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.506750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.507087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.507096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.507423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.507431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.507603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.507611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.507663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.507676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.507862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.507869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 
00:29:25.065 [2024-10-11 12:03:09.508052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.508060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.508394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.508405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.065 [2024-10-11 12:03:09.508743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.065 [2024-10-11 12:03:09.508751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.065 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.509095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.509102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.509311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.509319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.509680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.509689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.510021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.510029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.510228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.510237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.510575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.510584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.510814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.510825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 
00:29:25.066 [2024-10-11 12:03:09.511195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.511204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.511256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.511265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.511550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.511560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.511907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.511918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.512140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.512148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.512328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.512337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.512537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.512559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.512891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.512899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.513232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.513240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.513545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.513554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 
00:29:25.066 [2024-10-11 12:03:09.513734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.513742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.514024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.514032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.514377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.514388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.514692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.514704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.515054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.515063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.515343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.515354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.515553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.515562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.515891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.515901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.516247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.516255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.516589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.516597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 
00:29:25.066 [2024-10-11 12:03:09.516916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.516925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.517212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.517220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.517576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.517584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.517787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.517796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.518097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.518106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.518501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.518510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.518803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.518812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.519006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.519015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.066 [2024-10-11 12:03:09.519312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.066 [2024-10-11 12:03:09.519320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.066 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.519653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.519662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 
00:29:25.067 [2024-10-11 12:03:09.519995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.520004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.520327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.520335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.520510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.520518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.520837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.520845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.521186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.521195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.521516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.521525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.521837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.521847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.522162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.522169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.522527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.522536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.522840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.522849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 
00:29:25.067 [2024-10-11 12:03:09.523152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.523159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.523484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.523492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.523838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.523849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.524164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.524172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.524353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.524362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.524724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.524733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.525058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.525067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.525431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.525439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.525830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.525839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.526157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.526166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 
00:29:25.067 [2024-10-11 12:03:09.526496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.526505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.526878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.526887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.527221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.527228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.527553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.527562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.527888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.527897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.528206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.528215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.528411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.528420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.528595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.528603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.528889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.528898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.529189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.529197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 
00:29:25.067 [2024-10-11 12:03:09.529564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.529572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.529927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.529936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.530261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.530270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.530591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.530599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.530895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.530905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.531243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.067 [2024-10-11 12:03:09.531252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.067 qpair failed and we were unable to recover it. 00:29:25.067 [2024-10-11 12:03:09.531577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.531585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.531790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.531799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.532142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.532151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.532458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.532470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 
00:29:25.068 [2024-10-11 12:03:09.532650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.532660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.533062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.533070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.533252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.533260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.533616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.533624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.533954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.533962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.534135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.534146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.534475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.534483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.534845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.534855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.535148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.535156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.535483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.535492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 
00:29:25.068 [2024-10-11 12:03:09.535781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.535789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.536129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.536136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.536315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.536322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.536647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.536655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.536850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.536859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.537149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.537157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.537503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.537512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.537796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.537805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.538036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.538046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.538371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.538379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 
00:29:25.068 [2024-10-11 12:03:09.538686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.538695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.539032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.539041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.539225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.539234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.539588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.539597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.539922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.539930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.540286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.540297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.540602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.540611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.540828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.540837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.541163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.541171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.541479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.541487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 
00:29:25.068 [2024-10-11 12:03:09.541815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.541827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.542152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.542162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.068 [2024-10-11 12:03:09.542331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.068 [2024-10-11 12:03:09.542338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.068 qpair failed and we were unable to recover it. 00:29:25.069 [2024-10-11 12:03:09.542698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.069 [2024-10-11 12:03:09.542707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-10-11 12:03:09.542998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.069 [2024-10-11 12:03:09.543007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-10-11 12:03:09.543358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.069 [2024-10-11 12:03:09.543365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-10-11 12:03:09.543677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.069 [2024-10-11 12:03:09.543686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-10-11 12:03:09.543917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.069 [2024-10-11 12:03:09.543929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-10-11 12:03:09.544103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.069 [2024-10-11 12:03:09.544109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.069 qpair failed and we were unable to recover it. 00:29:25.069 [2024-10-11 12:03:09.544307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.069 [2024-10-11 12:03:09.544315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.069 qpair failed and we were unable to recover it. 
00:29:25.074 [2024-10-11 12:03:09.601722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.601730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.602026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.602034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.602346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.602354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.602683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.602691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.603031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.603039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.603365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.603373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.603683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.603692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.603891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.603901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.604116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.604126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.604492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.604502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 
00:29:25.074 [2024-10-11 12:03:09.604826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.604834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.605165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.605173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.605482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.605491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.605808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.605817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.606133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.606141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.606449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.606457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.606655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.606665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.607019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.607027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.607199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.607208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.607492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.607500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 
00:29:25.074 [2024-10-11 12:03:09.607837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.607846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.608189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.608197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.608502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.608510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.608804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.608813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.609174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.609182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.609520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.074 [2024-10-11 12:03:09.609528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.074 qpair failed and we were unable to recover it. 00:29:25.074 [2024-10-11 12:03:09.609873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.609882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.610134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.610146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.610302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.610310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.610614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.610622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 
00:29:25.075 [2024-10-11 12:03:09.611025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.611037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.611340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.611349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.611594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.611602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.611957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.611966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.612158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.612169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.612520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.612529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.612874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.612882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.613230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.613238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.613587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.613595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 00:29:25.075 [2024-10-11 12:03:09.613779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.075 [2024-10-11 12:03:09.613788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.075 qpair failed and we were unable to recover it. 
[... two further attempts on tqpair=0x102dbd0 at 12:03:09.614035 and 12:03:09.614239 fail with the same errno = 111 / unrecoverable-qpair sequence ...]
00:29:25.075 [2024-10-11 12:03:09.614505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10249a0 is same with the state(6) to be set
00:29:25.075 [2024-10-11 12:03:09.615280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.075 [2024-10-11 12:03:09.615391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6b4000b90 with addr=10.0.0.2, port=4420
00:29:25.075 qpair failed and we were unable to recover it.
[... three more attempts on tqpair=0x7fe6b4000b90 (12:03:09.615767, 12:03:09.616077, 12:03:09.616512) fail the same way ...]
00:29:25.075 [2024-10-11 12:03:09.616819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.075 [2024-10-11 12:03:09.616831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.075 qpair failed and we were unable to recover it.
[... from 12:03:09.617194 onward every remaining reconnect attempt repeats the same three-line sequence for tqpair=0x102dbd0 (connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it), down to the last logged attempt ...]
00:29:25.078 [2024-10-11 12:03:09.655172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.078 [2024-10-11 12:03:09.655180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.078 qpair failed and we were unable to recover it.
00:29:25.078 [2024-10-11 12:03:09.655351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.078 [2024-10-11 12:03:09.655359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.078 qpair failed and we were unable to recover it. 00:29:25.078 [2024-10-11 12:03:09.655719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.655731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.656094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.656103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.656433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.656441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.656770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.656779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.657088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.657098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.657392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.657400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.657789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.657800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.658126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.658138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.658448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.658456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 
00:29:25.079 [2024-10-11 12:03:09.658680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.658694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.659036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.659043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.659348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.659356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.659692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.659701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.660015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.660023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.660223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.660233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.660436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.660443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.660628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.660636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.660822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.660831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.661034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.661046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 
00:29:25.079 [2024-10-11 12:03:09.661359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.661367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.661694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.661702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.662017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.662024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.662378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.662388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.662592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.662599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.662928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.662939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.663143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.663152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.663355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.663367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.663705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.663713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.664027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.664036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 
00:29:25.079 [2024-10-11 12:03:09.664357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.664365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.664561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.664571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.664773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.664781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.665117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.665129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.665436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.665444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.665739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.665748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.666078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.666086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.079 qpair failed and we were unable to recover it. 00:29:25.079 [2024-10-11 12:03:09.666407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.079 [2024-10-11 12:03:09.666417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.080 qpair failed and we were unable to recover it. 00:29:25.080 [2024-10-11 12:03:09.666688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.080 [2024-10-11 12:03:09.666697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.080 qpair failed and we were unable to recover it. 00:29:25.080 [2024-10-11 12:03:09.666907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.080 [2024-10-11 12:03:09.666915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.080 qpair failed and we were unable to recover it. 
00:29:25.080 [2024-10-11 12:03:09.667096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.080 [2024-10-11 12:03:09.667108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.080 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.667418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.667429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.667616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.667626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.667805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.667814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.668168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.668177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.668466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.668475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.668762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.668770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.669090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.669098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.669284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.669293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.669480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.669489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 
00:29:25.356 [2024-10-11 12:03:09.669794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.669803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.670152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.670160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.670497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.356 [2024-10-11 12:03:09.670508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.356 qpair failed and we were unable to recover it. 00:29:25.356 [2024-10-11 12:03:09.670833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.670841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.671223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.671231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.671286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.671293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.671436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.671445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.671642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.671650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.671877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.671885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.672234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.672243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 
00:29:25.357 [2024-10-11 12:03:09.672442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.672452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.672803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.672812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.673180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.673188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.673383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.673391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.673569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.673583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.673797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.673808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.673863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.673871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.674155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.674164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.674403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.674411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.674635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.674642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 
00:29:25.357 [2024-10-11 12:03:09.675010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.675019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.675414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.675429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.675760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.675769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.676103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.676111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.676330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.676339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.676698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.676708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.677028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.677037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.677343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.677351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.677683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.677691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.678055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.678065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 
00:29:25.357 [2024-10-11 12:03:09.678380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.678388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.678679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.678690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.679014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.679023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.679206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.679216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.679547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.679555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.679858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.679867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.680062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.680070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.680418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.680425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.680618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.680628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.680920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.680930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 
00:29:25.357 [2024-10-11 12:03:09.681285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.681292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.357 [2024-10-11 12:03:09.681598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.357 [2024-10-11 12:03:09.681606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.357 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.681795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.681803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.682121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.682129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.682350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.682358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.682703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.682713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.682764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.682770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.683057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.683066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.683415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.683424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.683744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.683753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 
00:29:25.358 [2024-10-11 12:03:09.683994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.684002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.684341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.684349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.684626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.684633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.684971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.684978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.685326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.685336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.685694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.685707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.685932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.685941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.686272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.686279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.686586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.686593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.686922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.686931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 
00:29:25.358 [2024-10-11 12:03:09.687233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.687241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.687570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.687580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.687761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.687770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.688157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.688166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.688486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.688495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.688799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.688808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.689137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.689145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.689452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.689459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.689796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.689807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.690124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.690133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 
00:29:25.358 [2024-10-11 12:03:09.690461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.690469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.690765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.690773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.690960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.690968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.691249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.691256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.691614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.691625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.691972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.691981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.692316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.692324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.692611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.692619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.692970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.692978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 00:29:25.358 [2024-10-11 12:03:09.693287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.358 [2024-10-11 12:03:09.693296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.358 qpair failed and we were unable to recover it. 
00:29:25.358 [2024-10-11 12:03:09.693640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.358 [2024-10-11 12:03:09.693649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.358 qpair failed and we were unable to recover it.
00:29:25.364 (the three messages above repeated for every subsequent reconnect attempt against tqpair=0x102dbd0, addr=10.0.0.2, port=4420; only the timestamps differ, running from 12:03:09.693640 through 12:03:09.755106)
00:29:25.364 [2024-10-11 12:03:09.755098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.364 [2024-10-11 12:03:09.755106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.364 qpair failed and we were unable to recover it.
00:29:25.364 [2024-10-11 12:03:09.755435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.755443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.755804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.755812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.756115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.756122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.756324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.756332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.756656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.756663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.756976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.756994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.757173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.757180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.757538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.757547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.757629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.757638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.757921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.757932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 
00:29:25.364 [2024-10-11 12:03:09.758272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.758279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.758479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.758487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.758665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.758680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.758971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.758978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.759158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.759165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.759454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.759465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.759752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.759760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.759966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.759973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.760371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.760378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.760760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.760770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 
00:29:25.364 [2024-10-11 12:03:09.760949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.760958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.761247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.364 [2024-10-11 12:03:09.761255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.364 qpair failed and we were unable to recover it. 00:29:25.364 [2024-10-11 12:03:09.761546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.761555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.761851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.761860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.762127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.762135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.762475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.762483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.762567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.762573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.762866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.762874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.763217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.763224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.763428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.763437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 
00:29:25.365 [2024-10-11 12:03:09.763801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.763813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.764131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.764141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.764467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.764474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.764777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.764785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.765096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.765104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.765284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.765292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.765587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.765596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.765951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.765959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.766307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.766316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.766639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.766649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 
00:29:25.365 [2024-10-11 12:03:09.766980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.766990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.767333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.767341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.767634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.767642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.767815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.767825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.768043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.768052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.768333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.768343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.768700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.768710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.769048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.769055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.769253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.769260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.769485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.769493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 
00:29:25.365 [2024-10-11 12:03:09.769696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.769706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.770011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.770019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.770321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.770330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.770660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.770674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.771000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.771007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.771334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.771341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.365 qpair failed and we were unable to recover it. 00:29:25.365 [2024-10-11 12:03:09.771674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.365 [2024-10-11 12:03:09.771682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.772012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.772020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.772344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.772352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.772680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.772689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 
00:29:25.366 [2024-10-11 12:03:09.772966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.772974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.773328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.773336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.773696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.773705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.774032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.774040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.774344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.774352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.774664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.774679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.775015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.775025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.775378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.775387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.775744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.775752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.776090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.776097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 
00:29:25.366 [2024-10-11 12:03:09.776445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.776452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.776769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.776778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.777060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.777068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.777387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.777397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.777626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.777635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.777963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.777971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.778144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.778152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.778500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.778510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.778830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.778838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 00:29:25.366 [2024-10-11 12:03:09.779020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.366 [2024-10-11 12:03:09.779030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.366 qpair failed and we were unable to recover it. 
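For reference, errno = 111 on Linux is ECONNREFUSED: the target address answers, but nothing is accepting connections on 10.0.0.2 port 4420 (the conventional NVMe/TCP port), so each reconnect attempt is refused immediately and the host driver keeps retrying. A minimal standalone sketch that reproduces the same errno, assuming an address that is reachable but has no listener (both the address and port below are illustrative placeholders, not taken from this job's configuration):

/* Sketch: connect() to a reachable TCP port with no listener fails with
 * errno = 111 (ECONNREFUSED), the error seen throughout this log.
 * Address and port are illustrative placeholders. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),        /* conventional NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}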
00:29:25.366 Read completed with error (sct=0, sc=8)
00:29:25.366 starting I/O failed
00:29:25.366 [... 32 outstanding I/Os on the qpair (21 reads, 11 writes) complete the same way, each reported as "completed with error (sct=0, sc=8)" followed by "starting I/O failed" ...]
00:29:25.366 [2024-10-11 12:03:09.779800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:25.366 [2024-10-11 12:03:09.780178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.366 [2024-10-11 12:03:09.780245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6ac000b90 with addr=10.0.0.2, port=4420
00:29:25.366 qpair failed and we were unable to recover it.
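This burst is the driver draining a queue pair after the transport died: every I/O still outstanding on qpair id 2 is completed back to the application with an NVMe status pair (sct = status code type, sc = status code), and the CQ-level error -6 is ENXIO, matching the "No such device or address" text in the message above. A hedged sketch of how an SPDK completion callback surfaces that status (the callback name is hypothetical; struct spdk_nvme_cpl, its status fields, and spdk_nvme_cpl_is_error() are from SPDK's public nvme headers):

/* Sketch of an SPDK I/O completion callback inspecting the (sct, sc)
 * pair reported above. The callback name is illustrative; the types and
 * spdk_nvme_cpl_is_error() come from spdk/nvme.h. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        /* For the failures above this would print: sct=0, sc=8 */
        fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
                cpl->status.sct, cpl->status.sc);
        return;
    }
    /* ... success path ... */
}

Such a callback is the cb_fn argument to submission calls like spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(); when the transport drops, each queued command is completed through it with an error status rather than being silently lost.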
00:29:25.366 [2024-10-11 12:03:09.780495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.366 [2024-10-11 12:03:09.780506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.366 qpair failed and we were unable to recover it.
00:29:25.368 [... the same failure repeats for ~80 further reconnect attempts on tqpair=0x102dbd0 between 12:03:09.780495 and 12:03:09.803730, all with connect() errno = 111 and "qpair failed and we were unable to recover it." ...]
00:29:25.368 [2024-10-11 12:03:09.804035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.368 [2024-10-11 12:03:09.804044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.368 qpair failed and we were unable to recover it. 00:29:25.368 [2024-10-11 12:03:09.804396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.368 [2024-10-11 12:03:09.804405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.368 qpair failed and we were unable to recover it. 00:29:25.368 [2024-10-11 12:03:09.804715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.368 [2024-10-11 12:03:09.804728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.368 qpair failed and we were unable to recover it. 00:29:25.368 [2024-10-11 12:03:09.804908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.368 [2024-10-11 12:03:09.804915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.368 qpair failed and we were unable to recover it. 00:29:25.368 [2024-10-11 12:03:09.805194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.368 [2024-10-11 12:03:09.805205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.368 qpair failed and we were unable to recover it. 00:29:25.368 [2024-10-11 12:03:09.805561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.805569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.805778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.805785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.805984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.805992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.806203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.806211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.806428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.806436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 
00:29:25.369 [2024-10-11 12:03:09.806759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.806770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.807081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.807092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.807419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.807428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.807751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.807761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.808072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.808080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.808414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.808422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.808735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.808747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.808963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.808975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.809282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.809290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.809652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.809659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 
00:29:25.369 [2024-10-11 12:03:09.809992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.810002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.810318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.810326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.810655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.810663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.810971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.810980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.811288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.811298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.811629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.811638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.811878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.811889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.812105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.812114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.812446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.812455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.812677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.812686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 
00:29:25.369 [2024-10-11 12:03:09.813010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.813019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.813363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.813374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.813585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.813598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.813767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.813775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.814070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.814080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.814430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.814439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.814607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.814616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.814927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.814935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.815263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.815271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.815574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.815583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 
00:29:25.369 [2024-10-11 12:03:09.815892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.815900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.816189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.816197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.816472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.369 [2024-10-11 12:03:09.816481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.369 qpair failed and we were unable to recover it. 00:29:25.369 [2024-10-11 12:03:09.816813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.816822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.817012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.817020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.817354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.817362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.817548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.817556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.817722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.817732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.818064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.818075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.818400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.818409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 
00:29:25.370 [2024-10-11 12:03:09.818737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.818746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.819053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.819064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.819395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.819404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.819736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.819745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.819979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.819992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.820302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.820311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.820673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.820683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.820991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.821000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.821176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.821190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.821377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.821388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 
00:29:25.370 [2024-10-11 12:03:09.821654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.821663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.821961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.821974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.822281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.822293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.822520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.822528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.822720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.822729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.823030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.823038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.823233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.823242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.823443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.823451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.823788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.823800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.824122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.824131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 
00:29:25.370 [2024-10-11 12:03:09.824313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.824323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.824681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.824690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.825056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.825064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.825291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.825299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.825650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.825659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.826369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.826380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.826702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.826711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.827035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.827044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.827404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.827411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 00:29:25.370 [2024-10-11 12:03:09.827739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.827746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.370 qpair failed and we were unable to recover it. 
00:29:25.370 [2024-10-11 12:03:09.827941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.370 [2024-10-11 12:03:09.827948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.828232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.828240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.828580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.828587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.828636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.828642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.828962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.828970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.829307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.829316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.829522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.829530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.829783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.829792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.829979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.829989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.830318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.830326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 
00:29:25.371 [2024-10-11 12:03:09.830495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.830505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.830815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.830824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.831183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.831191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.831370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.831378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.831679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.831690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.831859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.831867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.832214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.832223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.832418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.832426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.832711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.832719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.832890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.832902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 
00:29:25.371 [2024-10-11 12:03:09.833190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.833198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.833374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.833382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.833726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.833737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.833944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.833953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.834281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.834290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.834486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.834493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.834839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.834847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.835188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.835196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.835398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.835406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.835750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.835761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 
00:29:25.371 [2024-10-11 12:03:09.835937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.835950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.836429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.836539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a8000b90 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.837098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.837204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe6a8000b90 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.837613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.837624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.837830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.837839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.838059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.838067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.838408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.838418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.838725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.838734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.839094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.839102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 00:29:25.371 [2024-10-11 12:03:09.839292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.839300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.371 qpair failed and we were unable to recover it. 
00:29:25.371 [2024-10-11 12:03:09.839478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.371 [2024-10-11 12:03:09.839486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.839792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.839800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.840145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.840153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.840504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.840514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.840853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.840863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.841191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.841199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.841531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.841541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.841882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.841891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.841979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.841985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 00:29:25.372 [2024-10-11 12:03:09.842142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.372 [2024-10-11 12:03:09.842150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.372 qpair failed and we were unable to recover it. 
00:29:25.372 [2024-10-11 12:03:09.842498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.372 [2024-10-11 12:03:09.842506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.372 qpair failed and we were unable to recover it.
00:29:25.372 [... the same three-line error pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 12:03:09.842 through 12:03:09.904 ...]
00:29:25.377 [2024-10-11 12:03:09.904390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.377 [2024-10-11 12:03:09.904398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.377 qpair failed and we were unable to recover it.
00:29:25.377 [2024-10-11 12:03:09.904685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.904694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.904907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.904916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.905273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.905281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.905601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.905618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.905974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.905983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.906201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.906212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.906504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.906514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.906836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.906845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.907197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.907205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.907574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.907582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 
00:29:25.377 [2024-10-11 12:03:09.907891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.907903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.908091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.908100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.908336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.908345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.908626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.908634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.908814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.908824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.377 qpair failed and we were unable to recover it. 00:29:25.377 [2024-10-11 12:03:09.909181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.377 [2024-10-11 12:03:09.909190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.909395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.909403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.909625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.909634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.909992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.910001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.910340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.910348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 
00:29:25.378 [2024-10-11 12:03:09.910681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.910691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.911099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.911109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.911445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.911454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.911694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.911706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.912060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.912070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.912367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.912375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.912611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.912621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.913035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.913045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.913223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.913231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.913555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.913563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 
00:29:25.378 [2024-10-11 12:03:09.913935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.913948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.914171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.914181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.914532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.914539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.914847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.914858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.915071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.915082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.915267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.915277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.915643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.915653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.915853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.915864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.916214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.916222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.916529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.916538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 
00:29:25.378 [2024-10-11 12:03:09.916774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.916785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.916954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.916965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.917167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.917176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.917473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.917483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.917823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.917833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.918173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.918181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.918502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.918512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.918846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.918855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.919163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.919173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 00:29:25.378 [2024-10-11 12:03:09.919371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.378 [2024-10-11 12:03:09.919380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.378 qpair failed and we were unable to recover it. 
00:29:25.379 [2024-10-11 12:03:09.919686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.919696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.920028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.920041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.920372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.920382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.920743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.920754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.921075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.921083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.921312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.921321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.921597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.921605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.921967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.921978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.922182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.922193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.922373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.922385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 
00:29:25.379 [2024-10-11 12:03:09.922487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.922495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.922829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.922839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.923211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.923220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.923561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.923570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.923775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.923783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.923998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.924008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.924356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.924367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.924718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.924727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.925050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.925063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.925411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.925420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 
00:29:25.379 [2024-10-11 12:03:09.925750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.925760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.925949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.925960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.926297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.926306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.926659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.926678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.926848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.926858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.927146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.927155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.927498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.927507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.927823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.927832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.928197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.928208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.928533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.928544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 
00:29:25.379 [2024-10-11 12:03:09.928857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.928866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.929241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.929252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.929539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.929547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.929843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.929852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.930049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.930058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.930419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.930428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.930680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.930689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.930881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.930889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.379 [2024-10-11 12:03:09.931190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.379 [2024-10-11 12:03:09.931198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.379 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.931560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.931571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 
00:29:25.380 [2024-10-11 12:03:09.931903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.931915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.932140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.932148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.932423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.932430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.932624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.932633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.932875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.932884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.933094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.933104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.933294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.933302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.933654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.933665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.934002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.934014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.934341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.934351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 
00:29:25.380 [2024-10-11 12:03:09.934679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.934689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.935107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.935116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.935428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.935437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.935637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.935648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.935821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.935829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.936016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.936026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.936218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.936226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.936429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.936440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.936643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.936652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.937057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.937066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 
00:29:25.380 [2024-10-11 12:03:09.937409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.937418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.937747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.937757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.937974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.937983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.938281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.938294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.938619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.938628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.939043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.939054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.939231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.939240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.939438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.939447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.939733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.939742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.940041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.940051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 
00:29:25.380 [2024-10-11 12:03:09.940360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.940368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.940727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.940739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.941053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.941063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.941235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.941243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.941602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.941612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.941794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.941809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.942113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.942124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.942471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.380 [2024-10-11 12:03:09.942482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.380 qpair failed and we were unable to recover it. 00:29:25.380 [2024-10-11 12:03:09.942709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.381 [2024-10-11 12:03:09.942719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.381 qpair failed and we were unable to recover it. 00:29:25.381 [2024-10-11 12:03:09.943022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.381 [2024-10-11 12:03:09.943032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.381 qpair failed and we were unable to recover it. 
00:29:25.381 [2024-10-11 12:03:09.943363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.381 [2024-10-11 12:03:09.943371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.381 qpair failed and we were unable to recover it.
(the connect()/qpair-failure triplet above repeats continuously from 12:03:09.943 to 12:03:09.988 against tqpair=0x102dbd0 at 10.0.0.2:4420; only the timestamps differ)
00:29:25.656 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:25.656 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:25.656 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:25.656 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:25.656 12:03:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
(the same connect()/qpair-failure triplet keeps repeating between and after these trace lines, 12:03:09.988 through 12:03:09.993)
(the triplet continues repeating into the next second, through 12:03:10.005, ending with:)
00:29:25.657 [2024-10-11 12:03:10.005100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.005110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it.
00:29:25.657 [2024-10-11 12:03:10.005456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.005468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.005677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.005690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.005751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.005765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.005965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.005976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.006202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.006211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.006522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.006532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.006648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.006657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.006919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.006927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.007101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.007109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.007408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.007418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 
00:29:25.657 [2024-10-11 12:03:10.007752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.007761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.007948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.007957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.008324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.008336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.008703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.008713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.008909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.008919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.009308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.009318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.009522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.009532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.009739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.009747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.010087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.010097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.010337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.010346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 
00:29:25.657 [2024-10-11 12:03:10.010507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.010517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.010861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.010870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.011105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.011113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.011456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.011468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.011658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.011678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.011923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.657 [2024-10-11 12:03:10.011931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.657 qpair failed and we were unable to recover it. 00:29:25.657 [2024-10-11 12:03:10.012188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.012198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.012614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.012625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.012726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.012733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.012838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.012852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 
00:29:25.658 [2024-10-11 12:03:10.012977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.012994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.013091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.013101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.013288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.013297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.013515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.013523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.013767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.013778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.014097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.014107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.014329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.014338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.014643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.014651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.014896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.014905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.015254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.015262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 
00:29:25.658 [2024-10-11 12:03:10.015604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.015616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.016007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.016018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.016205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.016213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.016586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.016595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.016853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.016863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.017201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.017208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.017536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.017547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.017775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.017783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.018031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.018040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.018404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.018413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 
00:29:25.658 [2024-10-11 12:03:10.018461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.018469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.018781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.018790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.018984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.018993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.019398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.019407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.019632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.019643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.019887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.019895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.020107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.020118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.020288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.020298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.020608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.020619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 00:29:25.658 [2024-10-11 12:03:10.020815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.658 [2024-10-11 12:03:10.020823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.658 qpair failed and we were unable to recover it. 
00:29:25.658 [2024-10-11 12:03:10.021036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.021044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.021411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.021420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.021758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.021767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.022023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.022032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.022215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.022223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.022426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.022436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.022638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.022649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.022960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.022969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.023064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.023074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.023371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.023380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 
00:29:25.659 [2024-10-11 12:03:10.023704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.023713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.024053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.024062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.024282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.024291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.024617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.024628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.024956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.024966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.025295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.025305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.025660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.025677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.025998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.026007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.026179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.026189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.026543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.026555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 
00:29:25.659 [2024-10-11 12:03:10.026834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.026843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.027040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.027049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.027398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.027406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.027709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.027718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.028070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.028080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.028260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.028277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.028579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.028590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.028805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.028815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.029194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.029202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.029417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.029426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 
00:29:25.659 [2024-10-11 12:03:10.029738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.029750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.029815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.029823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.030184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.030196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.030536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.030545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.030768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.030780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.030972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.030983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.031337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.031347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.031659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.031674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.031992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.032003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 00:29:25.659 [2024-10-11 12:03:10.032309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.659 [2024-10-11 12:03:10.032317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.659 qpair failed and we were unable to recover it. 
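errno = 111 here is ECONNREFUSED on Linux: posix_sock_create() reaches 10.0.0.2 but nothing is accepting on port 4420 (the standard NVMe/TCP port), which is the expected state while the target_disconnect test holds the target side down, so the initiator keeps retrying the qpair. A minimal shell-level sketch of the same probe, assuming bash with /dev/tcp support and GNU timeout on the test host (illustrative only; not part of the test scripts):

    # fails with "Connection refused" while the target is down;
    # succeeds once a listener is back on 10.0.0.2:4420
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo "connect() to 10.0.0.2:4420 refused -- the errno = 111 the log reports"
    fi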
00:29:25.659 [2024-10-11 12:03:10.032516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.660 [2024-10-11 12:03:10.032526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.660 qpair failed and we were unable to recover it.
00:29:25.660 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:25.660 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:25.660 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.660 [... connect() retries keep failing with errno = 111 between the traced commands ...]
00:29:25.660 [2024-10-11 12:03:10.034737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.660 [2024-10-11 12:03:10.034748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.660 qpair failed and we were unable to recover it.
00:29:25.660 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:25.660 [... further connect() failed (errno = 111) / qpair failed sequences through 2024-10-11 12:03:10.037 ...]
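The traced commands interleaved with the retries set up the next step of nvmf_target_disconnect_tc2: the trap registers cleanup (dump the app's shared memory, then nvmftestfini) so the environment is torn down on any exit even while qpairs are failing, and rpc_cmd asks the running SPDK target to create a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. A standalone sketch of the same two calls, assuming an SPDK checkout with scripts/rpc.py and a target already running; the cleanup body here is a hypothetical stand-in for the test helpers:

    # register cleanup first so a failed step still tears everything down
    trap 'echo "cleaning up"; kill "$TARGET_PID" 2>/dev/null' SIGINT SIGTERM EXIT
    # the same RPC the traced rpc_cmd wrapper issues: 64 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0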
00:29:25.660 [2024-10-11 12:03:10.037795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.660 [2024-10-11 12:03:10.037806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420
00:29:25.660 qpair failed and we were unable to recover it.
00:29:25.660 [... the same connect() failed (errno = 111) / qpair failed sequence repeats for each subsequent attempt through 2024-10-11 12:03:10.050 ...]
00:29:25.661 [2024-10-11 12:03:10.050954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.050963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.051158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.051166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.051459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.051468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.051800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.051809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.052015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.052022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.052234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.052243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.052537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.052544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.052846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.052856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.053027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.053036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.053377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.053387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 
00:29:25.661 [2024-10-11 12:03:10.053568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.053578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.053919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.661 [2024-10-11 12:03:10.053928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.661 qpair failed and we were unable to recover it. 00:29:25.661 [2024-10-11 12:03:10.054234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.054244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.054598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.054606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.054825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.054833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.055181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.055189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.055502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.055510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.055820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.055829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.056037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.056046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.056359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.056369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 
00:29:25.662 [2024-10-11 12:03:10.056429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.056436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.056780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.056787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.057113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.057125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.057336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.057345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.057553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.057561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.057900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.057910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.058248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.058259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.058562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.058578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.058785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.058793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.059103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.059112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 
00:29:25.662 [2024-10-11 12:03:10.059466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.059474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.059775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.059783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.060110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.060118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.060414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.060422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.060585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.060597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.060945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.060954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.061168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.061177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.061501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.061509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.061848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.061857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.062204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.062213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 
00:29:25.662 [2024-10-11 12:03:10.062416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.062426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.062613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.062625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.062821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.062831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.063121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.063131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.063299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.063308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.063682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.063693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.064057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.064065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.662 [2024-10-11 12:03:10.064393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.662 [2024-10-11 12:03:10.064402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.662 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.064748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.064759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.065098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.065110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 
00:29:25.663 [2024-10-11 12:03:10.065283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.065292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.065676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.065687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.066046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.066056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.066387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.066397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.066637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.066645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.066890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.066902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.067152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.067161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.067338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.067349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.067691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.067701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.067762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.067769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 
00:29:25.663 [2024-10-11 12:03:10.068084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.068091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.068287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.068298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.068473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.068480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.068808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.068819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.069191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.069200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.069508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.069518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.069694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.069703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.070021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.070033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.070216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.070225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.070524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.070534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 
00:29:25.663 [2024-10-11 12:03:10.070845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.070855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.071076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.071084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.071129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.071137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.071444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.071455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.071791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.071802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.072144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.072160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.072269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.072291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.072596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.072606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.072795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.072806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.073115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.073124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 
00:29:25.663 [2024-10-11 12:03:10.073464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.073472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.073658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.073666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.663 [2024-10-11 12:03:10.073964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.663 [2024-10-11 12:03:10.073974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.663 qpair failed and we were unable to recover it. 00:29:25.664 [2024-10-11 12:03:10.074307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.664 [2024-10-11 12:03:10.074316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.664 qpair failed and we were unable to recover it. 00:29:25.664 [2024-10-11 12:03:10.074632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.664 [2024-10-11 12:03:10.074640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.664 qpair failed and we were unable to recover it. 00:29:25.664 [2024-10-11 12:03:10.074862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.664 [2024-10-11 12:03:10.074869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.664 qpair failed and we were unable to recover it. 00:29:25.664 Malloc0 00:29:25.664 [2024-10-11 12:03:10.075042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.664 [2024-10-11 12:03:10.075049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.664 qpair failed and we were unable to recover it. 00:29:25.664 [2024-10-11 12:03:10.075326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.664 [2024-10-11 12:03:10.075334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.664 qpair failed and we were unable to recover it. 00:29:25.664 [2024-10-11 12:03:10.075677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.664 [2024-10-11 12:03:10.075687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.664 qpair failed and we were unable to recover it. 
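errno = 111 on Linux is ECONNREFUSED: each retry above reaches a host with nothing listening on port 4420 yet, because the target side is still being configured at this point in the test (see the RPC trace that follows). A minimal self-contained sketch of the same failure, using plain POSIX sockets rather than SPDK's posix_sock_create(), and aimed at a loopback port assumed to have no listener so the refusal is deterministic:

/* connect_refused.c - standalone sketch of the failure logged above: a TCP
 * connect() to a port with no listener fails with errno 111 (ECONNREFUSED)
 * on Linux. Plain POSIX, not SPDK code; address and port are placeholders. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* loopback stands in for 10.0.0.2 */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With no listener bound, prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Built with cc connect_refused.c and run on a Linux box with nothing bound to the port, this prints the same errno the log reports against 10.0.0.2:4420.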
00:29:25.664 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.664 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:25.664 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.664 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:25.664 [... connect() retries remain interleaved with the shell trace throughout ...]
00:29:25.664 [2024-10-11 12:03:10.082318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
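The *** TCP Transport Init *** notice marks the target finally bringing up its TCP transport while the initiator is still in its retry loop; the refusals persist until a listener is actually opened on 10.0.0.2:4420 later in the bring-up. A generic bounded-retry sketch of that initiator-side pattern follows; it is an illustration, not SPDK's actual nvme_tcp reconnect path, and the address, backoff, and attempt count are arbitrary:

/* retry_connect.c - generic bounded-retry sketch: treat ECONNREFUSED as
 * "listener not up yet" and retry with a short backoff, the shape of the
 * failure/retry stream in this log. Not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_with_retry(const char *ip, unsigned short port, int attempts)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int i = 0; i < attempts; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                 /* listener finally accepted the connection */
        int saved = errno;
        close(fd);
        if (saved != ECONNREFUSED) {   /* only "no listener yet" is retryable here */
            errno = saved;
            return -1;
        }
        fprintf(stderr, "attempt %d: %s, retrying\n", i + 1, strerror(saved));
        usleep(100 * 1000);            /* arbitrary 100 ms backoff between tries */
    }
    errno = ECONNREFUSED;
    return -1;
}

int main(void)
{
    /* 127.0.0.1:4420 stands in for the log's 10.0.0.2:4420 */
    int fd = connect_with_retry("127.0.0.1", 4420, 50);
    if (fd < 0) {
        perror("connect_with_retry");
        return 1;
    }
    close(fd);
    return 0;
}

Treating ECONNREFUSED as retryable while bailing out on any other errno mirrors what the log shows: the same errno repeated until the target's listener appears.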
00:29:25.665 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.665 [... two more connect()/qpair failures (12:03:10.091648, 12:03:10.091872) ...]
00:29:25.665 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:25.665 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.665 [... one more connect()/qpair failure at 12:03:10.092323 ...]
00:29:25.665 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:25.665 [... five more connect()/qpair failures between 12:03:10.092523 and 12:03:10.093676 ...]
00:29:25.665 [... the connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence repeats 30 more times between 12:03:10.093981 and 12:03:10.102771, identical apart from timestamps ...]
00:29:25.666 [... two more connect()/qpair failures (12:03:10.103029, 12:03:10.103242) ...]
00:29:25.666 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.666 [... one more connect()/qpair failure at 12:03:10.103456 ...]
00:29:25.666 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:25.666 [... one more connect()/qpair failure at 12:03:10.103829 ...]
00:29:25.666 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.666 [... one more connect()/qpair failure at 12:03:10.104148 ...]
00:29:25.667 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:25.667 [... three more connect()/qpair failures between 12:03:10.104522 and 12:03:10.105147 ...]
00:29:25.667 [... the connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence repeats 30 more times between 12:03:10.105399 and 12:03:10.113967, identical apart from timestamps ...]
00:29:25.667 [2024-10-11 12:03:10.114290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.667 [2024-10-11 12:03:10.114299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.667 qpair failed and we were unable to recover it. 00:29:25.667 [2024-10-11 12:03:10.114488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.667 [2024-10-11 12:03:10.114499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.667 qpair failed and we were unable to recover it. 00:29:25.667 [2024-10-11 12:03:10.114716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.667 [2024-10-11 12:03:10.114726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.667 qpair failed and we were unable to recover it. 00:29:25.667 [2024-10-11 12:03:10.114788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.667 [2024-10-11 12:03:10.114797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.115087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.115096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.115436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.115448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.668 [2024-10-11 12:03:10.115683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.115692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.668 [2024-10-11 12:03:10.116038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.116048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 
00:29:25.668 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.668 [2024-10-11 12:03:10.116294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.116305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.668 [2024-10-11 12:03:10.116544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.116554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.116747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.116757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.117108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.117118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.117433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.117442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.117686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.117695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.117923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.117933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.117997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.118005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 00:29:25.668 [2024-10-11 12:03:10.118340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.668 [2024-10-11 12:03:10.118349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102dbd0 with addr=10.0.0.2, port=4420 00:29:25.668 qpair failed and we were unable to recover it. 
00:29:25.668 [... the connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence repeats 16 more times between 12:03:10.118636 and 12:03:10.122654, identical apart from timestamps ...]
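The rpc_cmd traces interleaved above configure the target that finally starts listening in the next block. For reference, a minimal sketch of the same sequence via SPDK's scripts/rpc.py, assuming a running nvmf_tgt and an SPDK source tree; the transport and Malloc0 bdev creation do not appear in this excerpt, so those two lines (and the malloc size/block-size arguments) are assumptions:

  $ scripts/rpc.py nvmf_create_transport -t TCP                   # assumed to happen earlier in the test
  $ scripts/rpc.py bdev_malloc_create -b Malloc0 64 512           # hypothetical 64 MiB bdev, 512 B blocks
  $ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $ scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # added just below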
00:29:25.668 [2024-10-11 12:03:10.122702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:25.668 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.668 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:25.668 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.668 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:25.668 [2024-10-11 12:03:10.133619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.668 [2024-10-11 12:03:10.133722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.668 [2024-10-11 12:03:10.133748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.668 [2024-10-11 12:03:10.133755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.668 [2024-10-11 12:03:10.133760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.668 [2024-10-11 12:03:10.133783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.668 qpair failed and we were unable to recover it.
00:29:25.669 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.669 12:03:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1203290
00:29:25.669 [... an identical CONNECT failure block follows at 12:03:10.143486 ...]
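Decoding the Fabric CONNECT error above: sct and sc are logged in decimal. sct 1 is the command-specific status type, and sc 130 is 0x82, which for a Fabrics CONNECT response maps to "Connect Invalid Parameters" in the NVMe-oF specification. That reading is consistent with the target-side "Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair: the host is trying to attach I/O qpair id 3 to a controller the target no longer recognizes, which is the condition this target_disconnect test deliberately provokes. Hex conversion of the logged values:

  $ printf 'sct=0x%x sc=0x%x\n' 1 130
  sct=0x1 sc=0x82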
00:29:25.669 [2024-10-11 12:03:10.153500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.153573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.153589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.153595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.153600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.153614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 00:29:25.669 [2024-10-11 12:03:10.163440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.163516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.163535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.163542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.163546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.163561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 00:29:25.669 [2024-10-11 12:03:10.173430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.173505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.173528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.173534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.173539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.173553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 
00:29:25.669 [2024-10-11 12:03:10.183431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.183528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.183544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.183549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.183554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.183568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 00:29:25.669 [2024-10-11 12:03:10.193439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.193509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.193525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.193530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.193535] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.193549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 00:29:25.669 [2024-10-11 12:03:10.203499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.203570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.203586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.203592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.203596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.203611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 
00:29:25.669 [2024-10-11 12:03:10.213566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.213632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.213650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.213656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.213666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.213691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 00:29:25.669 [2024-10-11 12:03:10.223567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.223632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.223648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.223654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.223659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.223681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 00:29:25.669 [2024-10-11 12:03:10.233659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.233723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.233739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.233744] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.233749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.233764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 
00:29:25.669 [2024-10-11 12:03:10.243615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.243689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.243702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.243708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.243714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.243727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 00:29:25.669 [2024-10-11 12:03:10.253653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.253725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.253740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.253745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.253750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.253763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 00:29:25.669 [2024-10-11 12:03:10.263696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.263758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.669 [2024-10-11 12:03:10.263784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.669 [2024-10-11 12:03:10.263790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.669 [2024-10-11 12:03:10.263796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.669 [2024-10-11 12:03:10.263812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.669 qpair failed and we were unable to recover it. 
00:29:25.669 [2024-10-11 12:03:10.273720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.669 [2024-10-11 12:03:10.273788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.670 [2024-10-11 12:03:10.273825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.670 [2024-10-11 12:03:10.273831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.670 [2024-10-11 12:03:10.273835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.670 [2024-10-11 12:03:10.273850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.670 qpair failed and we were unable to recover it. 00:29:25.933 [2024-10-11 12:03:10.283753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.933 [2024-10-11 12:03:10.283824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.933 [2024-10-11 12:03:10.283839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.933 [2024-10-11 12:03:10.283845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.933 [2024-10-11 12:03:10.283850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.933 [2024-10-11 12:03:10.283864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.933 qpair failed and we were unable to recover it. 00:29:25.933 [2024-10-11 12:03:10.293836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:25.933 [2024-10-11 12:03:10.293931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:25.933 [2024-10-11 12:03:10.293946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:25.933 [2024-10-11 12:03:10.293952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:25.933 [2024-10-11 12:03:10.293957] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:25.933 [2024-10-11 12:03:10.293970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:25.933 qpair failed and we were unable to recover it. 
00:29:25.933 [2024-10-11 12:03:10.303785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.303843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.303857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.303863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.303879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.303893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.313791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.313853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.313870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.313876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.313880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.313895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.324008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.324091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.324107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.324112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.324117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.324131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.333967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.334076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.334092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.334098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.334103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.334116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.343817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.343879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.343893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.343898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.343903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.343916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.354023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.354094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.354109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.354114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.354119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.354132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.363989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.364051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.364067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.364073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.364078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.364092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.374015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.374082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.374097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.374103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.374107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.374121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.384040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.384095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.384111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.384116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.384121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.384135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.394074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.394133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.394146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.394151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.394161] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.394174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.403998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.404062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.404076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.933 [2024-10-11 12:03:10.404081] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.933 [2024-10-11 12:03:10.404086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.933 [2024-10-11 12:03:10.404099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.933 qpair failed and we were unable to recover it.
00:29:25.933 [2024-10-11 12:03:10.414152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.933 [2024-10-11 12:03:10.414214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.933 [2024-10-11 12:03:10.414231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.414236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.414240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.414254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.424158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.424220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.424235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.424240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.424245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.424258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.434179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.434239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.434253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.434258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.434263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.434276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.444231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.444306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.444321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.444330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.444335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.444349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.454304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.454411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.454426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.454431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.454436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.454450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.464275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.464357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.464373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.464379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.464383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.464397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.474352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.474409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.474427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.474433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.474438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.474452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.484270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.484339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.484354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.484360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.484371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.484384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.494419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.494512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.494527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.494532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.494537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.494550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.504292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.504379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.504398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.504404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.504409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.504424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.514463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.514542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.514561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.514567] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.514571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.514587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.524456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.524519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.524535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.524541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.524545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.524560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.534487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.534569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.534583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.534589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.534593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.534606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.544461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.544523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.544537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.544543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.934 [2024-10-11 12:03:10.544547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.934 [2024-10-11 12:03:10.544561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.934 qpair failed and we were unable to recover it.
00:29:25.934 [2024-10-11 12:03:10.554393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:25.934 [2024-10-11 12:03:10.554450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:25.934 [2024-10-11 12:03:10.554466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:25.934 [2024-10-11 12:03:10.554472] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.935 [2024-10-11 12:03:10.554476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:25.935 [2024-10-11 12:03:10.554490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:25.935 qpair failed and we were unable to recover it.
00:29:26.198 [2024-10-11 12:03:10.564550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.198 [2024-10-11 12:03:10.564620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.198 [2024-10-11 12:03:10.564636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.198 [2024-10-11 12:03:10.564642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.198 [2024-10-11 12:03:10.564647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.198 [2024-10-11 12:03:10.564661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.198 qpair failed and we were unable to recover it.
00:29:26.198 [2024-10-11 12:03:10.574607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.198 [2024-10-11 12:03:10.574687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.198 [2024-10-11 12:03:10.574703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.198 [2024-10-11 12:03:10.574714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.198 [2024-10-11 12:03:10.574719] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.198 [2024-10-11 12:03:10.574732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.198 qpair failed and we were unable to recover it.
00:29:26.198 [2024-10-11 12:03:10.584604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.198 [2024-10-11 12:03:10.584690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.198 [2024-10-11 12:03:10.584707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.198 [2024-10-11 12:03:10.584713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.198 [2024-10-11 12:03:10.584717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.198 [2024-10-11 12:03:10.584731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.198 qpair failed and we were unable to recover it.
00:29:26.198 [2024-10-11 12:03:10.594638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.198 [2024-10-11 12:03:10.594699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.198 [2024-10-11 12:03:10.594713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.198 [2024-10-11 12:03:10.594719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.594724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.594737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.604725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.604801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.604814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.604820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.604825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.604838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.614729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.614802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.614825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.614832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.614837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.614854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.624725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.624792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.624808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.624814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.624818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.624832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.634611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.634683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.634698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.634704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.634709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.634723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.644766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.644833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.644846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.644852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.644857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.644871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.654837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.654935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.654950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.654955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.654960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.654974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.664832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.664886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.664902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.664912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.664917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.664930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.674871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.674963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.674983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.674988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.674993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.675007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.684922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.684983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.684997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.685003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.685007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.685021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.694983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.695056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.695071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.695076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.695080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.695094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.704858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.704913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.704928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.704934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.704938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.704952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.715041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.715107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.715123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.715128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.715133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.715146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.725044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.725107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.725122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.199 [2024-10-11 12:03:10.725127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.199 [2024-10-11 12:03:10.725131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.199 [2024-10-11 12:03:10.725145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.199 qpair failed and we were unable to recover it.
00:29:26.199 [2024-10-11 12:03:10.735121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.199 [2024-10-11 12:03:10.735193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.199 [2024-10-11 12:03:10.735208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.735213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.735217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.735231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.745103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.745165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.745180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.745185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.745190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.745203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.755135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.755194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.755208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.755218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.755223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.755236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.765171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.765273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.765289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.765295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.765300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.765313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.775092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.775159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.775174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.775179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.775184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.775197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.785235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.785287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.785301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.785306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.785310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.785323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.795245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.795304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.795318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.795323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.795328] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.795341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.805265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.805342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.805357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.805362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.805366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.805379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.815312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.815383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.815403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.815410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.815415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.815431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.200 [2024-10-11 12:03:10.825203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.200 [2024-10-11 12:03:10.825267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.200 [2024-10-11 12:03:10.825283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.200 [2024-10-11 12:03:10.825289] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.200 [2024-10-11 12:03:10.825293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.200 [2024-10-11 12:03:10.825307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.200 qpair failed and we were unable to recover it.
00:29:26.463 [2024-10-11 12:03:10.835370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.463 [2024-10-11 12:03:10.835425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.463 [2024-10-11 12:03:10.835442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.463 [2024-10-11 12:03:10.835448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.463 [2024-10-11 12:03:10.835452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.463 [2024-10-11 12:03:10.835465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.463 qpair failed and we were unable to recover it.
00:29:26.463 [2024-10-11 12:03:10.845382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.463 [2024-10-11 12:03:10.845459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.463 [2024-10-11 12:03:10.845495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.463 [2024-10-11 12:03:10.845510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.463 [2024-10-11 12:03:10.845517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.463 [2024-10-11 12:03:10.845537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.463 qpair failed and we were unable to recover it.
00:29:26.463 [2024-10-11 12:03:10.855449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.463 [2024-10-11 12:03:10.855515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.463 [2024-10-11 12:03:10.855533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.463 [2024-10-11 12:03:10.855539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.463 [2024-10-11 12:03:10.855545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.463 [2024-10-11 12:03:10.855561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.463 qpair failed and we were unable to recover it.
00:29:26.463 [2024-10-11 12:03:10.865401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.463 [2024-10-11 12:03:10.865465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.463 [2024-10-11 12:03:10.865482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.463 [2024-10-11 12:03:10.865489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.463 [2024-10-11 12:03:10.865495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.463 [2024-10-11 12:03:10.865511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.463 qpair failed and we were unable to recover it.
00:29:26.463 [2024-10-11 12:03:10.875448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.463 [2024-10-11 12:03:10.875507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.463 [2024-10-11 12:03:10.875523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.463 [2024-10-11 12:03:10.875529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.463 [2024-10-11 12:03:10.875534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.463 [2024-10-11 12:03:10.875549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.463 qpair failed and we were unable to recover it.
00:29:26.463 [2024-10-11 12:03:10.885504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.463 [2024-10-11 12:03:10.885568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.463 [2024-10-11 12:03:10.885585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.463 [2024-10-11 12:03:10.885591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.463 [2024-10-11 12:03:10.885596] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.463 [2024-10-11 12:03:10.885611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.895561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.895631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.895646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.895652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.895656] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.895676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.905534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.905602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.905617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.905622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.905627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.905640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.915581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.915642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.915660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.915665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.915676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.915691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.925496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.925563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.925582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.925587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.925592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.925606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.935641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.935708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.935731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.935737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.935741] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.935755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.945675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.945740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.945754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.945759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.945763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.945777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.955702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.955759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.955774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.955780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.955784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.955799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.965748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.965810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.965827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.965833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.965837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.965851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.975786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.975851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.975866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.975872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.975876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.975890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.985773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.985829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.985845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.985850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.985854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.985867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:10.995850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:10.995916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:10.995930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:10.995935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:10.995939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:10.995953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:11.005850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:11.005913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:11.005928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:11.005933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:11.005938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:11.005951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:11.015928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:11.016004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:11.016019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.464 [2024-10-11 12:03:11.016025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.464 [2024-10-11 12:03:11.016029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.464 [2024-10-11 12:03:11.016043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.464 qpair failed and we were unable to recover it.
00:29:26.464 [2024-10-11 12:03:11.025941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.464 [2024-10-11 12:03:11.026005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.464 [2024-10-11 12:03:11.026031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.465 [2024-10-11 12:03:11.026037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.465 [2024-10-11 12:03:11.026044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.465 [2024-10-11 12:03:11.026059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.465 qpair failed and we were unable to recover it.
00:29:26.465 [2024-10-11 12:03:11.035960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.465 [2024-10-11 12:03:11.036018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.465 [2024-10-11 12:03:11.036034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.465 [2024-10-11 12:03:11.036040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.465 [2024-10-11 12:03:11.036044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.465 [2024-10-11 12:03:11.036059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.465 qpair failed and we were unable to recover it.
00:29:26.465 [2024-10-11 12:03:11.046024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.465 [2024-10-11 12:03:11.046087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.465 [2024-10-11 12:03:11.046102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.465 [2024-10-11 12:03:11.046107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.465 [2024-10-11 12:03:11.046112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.465 [2024-10-11 12:03:11.046126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.465 qpair failed and we were unable to recover it.
00:29:26.465 [2024-10-11 12:03:11.056071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.465 [2024-10-11 12:03:11.056130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.465 [2024-10-11 12:03:11.056146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.465 [2024-10-11 12:03:11.056151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.465 [2024-10-11 12:03:11.056156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.465 [2024-10-11 12:03:11.056171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.465 qpair failed and we were unable to recover it.
00:29:26.465 [2024-10-11 12:03:11.066054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.465 [2024-10-11 12:03:11.066121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.465 [2024-10-11 12:03:11.066139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.465 [2024-10-11 12:03:11.066145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.465 [2024-10-11 12:03:11.066149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.465 [2024-10-11 12:03:11.066167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.465 qpair failed and we were unable to recover it.
00:29:26.465 [2024-10-11 12:03:11.076091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.465 [2024-10-11 12:03:11.076146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.465 [2024-10-11 12:03:11.076163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.465 [2024-10-11 12:03:11.076168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.465 [2024-10-11 12:03:11.076172] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.465 [2024-10-11 12:03:11.076187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.465 qpair failed and we were unable to recover it.
00:29:26.465 [2024-10-11 12:03:11.086116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.465 [2024-10-11 12:03:11.086177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.465 [2024-10-11 12:03:11.086191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.465 [2024-10-11 12:03:11.086196] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.465 [2024-10-11 12:03:11.086200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.465 [2024-10-11 12:03:11.086214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.465 qpair failed and we were unable to recover it.
00:29:26.728 [2024-10-11 12:03:11.096149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.728 [2024-10-11 12:03:11.096216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.728 [2024-10-11 12:03:11.096238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.728 [2024-10-11 12:03:11.096243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.728 [2024-10-11 12:03:11.096248] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.728 [2024-10-11 12:03:11.096263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.728 qpair failed and we were unable to recover it.
00:29:26.728 [2024-10-11 12:03:11.106184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.728 [2024-10-11 12:03:11.106237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.728 [2024-10-11 12:03:11.106252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.728 [2024-10-11 12:03:11.106258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.728 [2024-10-11 12:03:11.106263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.728 [2024-10-11 12:03:11.106277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.728 qpair failed and we were unable to recover it.
00:29:26.728 [2024-10-11 12:03:11.116216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.728 [2024-10-11 12:03:11.116278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.728 [2024-10-11 12:03:11.116305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.728 [2024-10-11 12:03:11.116310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.728 [2024-10-11 12:03:11.116315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.728 [2024-10-11 12:03:11.116329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.728 qpair failed and we were unable to recover it.
00:29:26.728 [2024-10-11 12:03:11.126236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.728 [2024-10-11 12:03:11.126314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.728 [2024-10-11 12:03:11.126329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.728 [2024-10-11 12:03:11.126334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.728 [2024-10-11 12:03:11.126339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.728 [2024-10-11 12:03:11.126353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.728 qpair failed and we were unable to recover it.
00:29:26.728 [2024-10-11 12:03:11.136276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.728 [2024-10-11 12:03:11.136347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.728 [2024-10-11 12:03:11.136361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.728 [2024-10-11 12:03:11.136367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.728 [2024-10-11 12:03:11.136372] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.728 [2024-10-11 12:03:11.136385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.728 qpair failed and we were unable to recover it.
00:29:26.728 [2024-10-11 12:03:11.146276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.728 [2024-10-11 12:03:11.146340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.728 [2024-10-11 12:03:11.146355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.728 [2024-10-11 12:03:11.146360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.728 [2024-10-11 12:03:11.146365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.728 [2024-10-11 12:03:11.146378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.728 qpair failed and we were unable to recover it.
00:29:26.728 [2024-10-11 12:03:11.156288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.728 [2024-10-11 12:03:11.156340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.728 [2024-10-11 12:03:11.156356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.728 [2024-10-11 12:03:11.156362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.728 [2024-10-11 12:03:11.156366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.156386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.166351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.166425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.166440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.166445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.166449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.166462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.176412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.176474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.176488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.176494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.176498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.176511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.186273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.186330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.186344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.186349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.186353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.186367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.196446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.196498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.196512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.196517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.196522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.196534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.206382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.206451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.206490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.206497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.206502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.206523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.216523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.216599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.216618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.216623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.216628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.216643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.226429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.226516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.226532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.226537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.226541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.226556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.236580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.236634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.236651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.236656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.236661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.236681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.246591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.246661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.246680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.246686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.246690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.246710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.256644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.256723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.256743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.256749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.256754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.256771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.266700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.266795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.266811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.266816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.266820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.266835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.276705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.276763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.276778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.276783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.276788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.276801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.286727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.286789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.286804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.286809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.286814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.286827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.729 [2024-10-11 12:03:11.296651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.729 [2024-10-11 12:03:11.296724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.729 [2024-10-11 12:03:11.296745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.729 [2024-10-11 12:03:11.296751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.729 [2024-10-11 12:03:11.296755] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.729 [2024-10-11 12:03:11.296769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.729 qpair failed and we were unable to recover it.
00:29:26.730 [2024-10-11 12:03:11.306759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.730 [2024-10-11 12:03:11.306819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.730 [2024-10-11 12:03:11.306838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.730 [2024-10-11 12:03:11.306843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.730 [2024-10-11 12:03:11.306848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.730 [2024-10-11 12:03:11.306862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.730 qpair failed and we were unable to recover it.
00:29:26.730 [2024-10-11 12:03:11.316805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.730 [2024-10-11 12:03:11.316858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.730 [2024-10-11 12:03:11.316873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.730 [2024-10-11 12:03:11.316879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.730 [2024-10-11 12:03:11.316883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.730 [2024-10-11 12:03:11.316896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.730 qpair failed and we were unable to recover it.
00:29:26.730 [2024-10-11 12:03:11.326829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.730 [2024-10-11 12:03:11.326895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.730 [2024-10-11 12:03:11.326910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.730 [2024-10-11 12:03:11.326916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.730 [2024-10-11 12:03:11.326920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.730 [2024-10-11 12:03:11.326934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.730 qpair failed and we were unable to recover it.
00:29:26.730 [2024-10-11 12:03:11.336886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.730 [2024-10-11 12:03:11.336960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.730 [2024-10-11 12:03:11.336974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.730 [2024-10-11 12:03:11.336979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.730 [2024-10-11 12:03:11.336984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.730 [2024-10-11 12:03:11.337002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.730 qpair failed and we were unable to recover it.
00:29:26.730 [2024-10-11 12:03:11.346929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.730 [2024-10-11 12:03:11.346984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.730 [2024-10-11 12:03:11.347001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.730 [2024-10-11 12:03:11.347006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.730 [2024-10-11 12:03:11.347010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.730 [2024-10-11 12:03:11.347024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.730 qpair failed and we were unable to recover it.
00:29:26.730 [2024-10-11 12:03:11.356912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.730 [2024-10-11 12:03:11.356970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.730 [2024-10-11 12:03:11.356988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.730 [2024-10-11 12:03:11.356995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.730 [2024-10-11 12:03:11.357000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.730 [2024-10-11 12:03:11.357014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.730 qpair failed and we were unable to recover it.
00:29:26.993 [2024-10-11 12:03:11.366968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.993 [2024-10-11 12:03:11.367035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.993 [2024-10-11 12:03:11.367050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.993 [2024-10-11 12:03:11.367055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.993 [2024-10-11 12:03:11.367060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.993 [2024-10-11 12:03:11.367073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.993 qpair failed and we were unable to recover it.
00:29:26.993 [2024-10-11 12:03:11.377028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.993 [2024-10-11 12:03:11.377102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.993 [2024-10-11 12:03:11.377116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.993 [2024-10-11 12:03:11.377122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.993 [2024-10-11 12:03:11.377126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.993 [2024-10-11 12:03:11.377140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.993 qpair failed and we were unable to recover it.
00:29:26.993 [2024-10-11 12:03:11.387039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.993 [2024-10-11 12:03:11.387096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.993 [2024-10-11 12:03:11.387115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.993 [2024-10-11 12:03:11.387121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.993 [2024-10-11 12:03:11.387125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.993 [2024-10-11 12:03:11.387138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.993 qpair failed and we were unable to recover it.
00:29:26.993 [2024-10-11 12:03:11.397064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.993 [2024-10-11 12:03:11.397124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.993 [2024-10-11 12:03:11.397138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.993 [2024-10-11 12:03:11.397143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.993 [2024-10-11 12:03:11.397148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.993 [2024-10-11 12:03:11.397161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.993 qpair failed and we were unable to recover it.
00:29:26.993 [2024-10-11 12:03:11.407110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.993 [2024-10-11 12:03:11.407175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.993 [2024-10-11 12:03:11.407192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.993 [2024-10-11 12:03:11.407197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.993 [2024-10-11 12:03:11.407201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.993 [2024-10-11 12:03:11.407216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.993 qpair failed and we were unable to recover it.
00:29:26.993 [2024-10-11 12:03:11.417145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.993 [2024-10-11 12:03:11.417219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.993 [2024-10-11 12:03:11.417234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.993 [2024-10-11 12:03:11.417239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.993 [2024-10-11 12:03:11.417244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.993 [2024-10-11 12:03:11.417257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.993 qpair failed and we were unable to recover it.
00:29:26.993 [2024-10-11 12:03:11.427159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.993 [2024-10-11 12:03:11.427220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.993 [2024-10-11 12:03:11.427234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.993 [2024-10-11 12:03:11.427240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.993 [2024-10-11 12:03:11.427249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.993 [2024-10-11 12:03:11.427263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.994 qpair failed and we were unable to recover it.
00:29:26.994 [2024-10-11 12:03:11.437178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:26.994 [2024-10-11 12:03:11.437235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:26.994 [2024-10-11 12:03:11.437251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:26.994 [2024-10-11 12:03:11.437256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:26.994 [2024-10-11 12:03:11.437261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:26.994 [2024-10-11 12:03:11.437275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:26.994 qpair failed and we were unable to recover it.
00:29:26.994 [2024-10-11 12:03:11.447232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.447296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.447311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.447316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.447320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.447334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.457278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.457350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.457366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.457372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.457376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.457389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.467270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.467337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.467352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.467357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.467362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.467375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 
00:29:26.994 [2024-10-11 12:03:11.477320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.477385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.477426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.477433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.477440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.477460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.487343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.487413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.487447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.487453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.487458] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.487479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.497390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.497464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.497498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.497505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.497510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.497529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 
00:29:26.994 [2024-10-11 12:03:11.507382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.507438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.507461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.507467] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.507472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.507488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.517434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.517496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.517513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.517518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.517529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.517544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.527476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.527541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.527556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.527562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.527567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.527583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 
00:29:26.994 [2024-10-11 12:03:11.537519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.537586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.537601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.537607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.537611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.537624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.547507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.547562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.547577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.547583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.547587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.547601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.557558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.557611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.557629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.557634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.557639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.557653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 
00:29:26.994 [2024-10-11 12:03:11.567576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.994 [2024-10-11 12:03:11.567648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.994 [2024-10-11 12:03:11.567664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.994 [2024-10-11 12:03:11.567679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.994 [2024-10-11 12:03:11.567683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.994 [2024-10-11 12:03:11.567697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.994 qpair failed and we were unable to recover it. 00:29:26.994 [2024-10-11 12:03:11.577646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.995 [2024-10-11 12:03:11.577716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.995 [2024-10-11 12:03:11.577730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.995 [2024-10-11 12:03:11.577735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.995 [2024-10-11 12:03:11.577740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.995 [2024-10-11 12:03:11.577754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.995 qpair failed and we were unable to recover it. 00:29:26.995 [2024-10-11 12:03:11.587605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.995 [2024-10-11 12:03:11.587680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.995 [2024-10-11 12:03:11.587695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.995 [2024-10-11 12:03:11.587700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.995 [2024-10-11 12:03:11.587704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.995 [2024-10-11 12:03:11.587718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.995 qpair failed and we were unable to recover it. 
00:29:26.995 [2024-10-11 12:03:11.597682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.995 [2024-10-11 12:03:11.597782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.995 [2024-10-11 12:03:11.597797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.995 [2024-10-11 12:03:11.597802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.995 [2024-10-11 12:03:11.597806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.995 [2024-10-11 12:03:11.597820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.995 qpair failed and we were unable to recover it. 00:29:26.995 [2024-10-11 12:03:11.607750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.995 [2024-10-11 12:03:11.607815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.995 [2024-10-11 12:03:11.607831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.995 [2024-10-11 12:03:11.607836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.995 [2024-10-11 12:03:11.607846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.995 [2024-10-11 12:03:11.607860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.995 qpair failed and we were unable to recover it. 00:29:26.995 [2024-10-11 12:03:11.617746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:26.995 [2024-10-11 12:03:11.617817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:26.995 [2024-10-11 12:03:11.617837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:26.995 [2024-10-11 12:03:11.617843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:26.995 [2024-10-11 12:03:11.617848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:26.995 [2024-10-11 12:03:11.617863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.995 qpair failed and we were unable to recover it. 
00:29:27.258 [2024-10-11 12:03:11.627757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.258 [2024-10-11 12:03:11.627843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.258 [2024-10-11 12:03:11.627858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.258 [2024-10-11 12:03:11.627864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.258 [2024-10-11 12:03:11.627869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.258 [2024-10-11 12:03:11.627883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.258 qpair failed and we were unable to recover it. 00:29:27.258 [2024-10-11 12:03:11.637690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.258 [2024-10-11 12:03:11.637747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.258 [2024-10-11 12:03:11.637761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.258 [2024-10-11 12:03:11.637766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.258 [2024-10-11 12:03:11.637771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.258 [2024-10-11 12:03:11.637785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.258 qpair failed and we were unable to recover it. 00:29:27.258 [2024-10-11 12:03:11.647851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.258 [2024-10-11 12:03:11.647915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.258 [2024-10-11 12:03:11.647930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.258 [2024-10-11 12:03:11.647936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.258 [2024-10-11 12:03:11.647940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.258 [2024-10-11 12:03:11.647954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.258 qpair failed and we were unable to recover it. 
00:29:27.258 [2024-10-11 12:03:11.657890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.258 [2024-10-11 12:03:11.657962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.258 [2024-10-11 12:03:11.657980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.258 [2024-10-11 12:03:11.657985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.258 [2024-10-11 12:03:11.657990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.258 [2024-10-11 12:03:11.658004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.258 qpair failed and we were unable to recover it. 00:29:27.258 [2024-10-11 12:03:11.667924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.258 [2024-10-11 12:03:11.668014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.258 [2024-10-11 12:03:11.668030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.258 [2024-10-11 12:03:11.668035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.258 [2024-10-11 12:03:11.668039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.258 [2024-10-11 12:03:11.668053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.258 qpair failed and we were unable to recover it. 00:29:27.258 [2024-10-11 12:03:11.677913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.258 [2024-10-11 12:03:11.677973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.258 [2024-10-11 12:03:11.677988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.677993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.677998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.678011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 
00:29:27.259 [2024-10-11 12:03:11.687980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.688042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.688057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.688062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.688066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.688080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.698024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.698097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.698110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.698116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.698126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.698139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.708032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.708102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.708118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.708124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.708128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.708142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 
00:29:27.259 [2024-10-11 12:03:11.718056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.718118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.718136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.718142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.718146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.718160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.728068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.728132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.728146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.728151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.728156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.728169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.738150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.738222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.738236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.738242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.738246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.738260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 
00:29:27.259 [2024-10-11 12:03:11.748130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.748200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.748215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.748220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.748224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.748238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.758174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.758263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.758279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.758285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.758289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.758303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.768230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.768299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.768314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.768320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.768324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.768337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 
00:29:27.259 [2024-10-11 12:03:11.778279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.778351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.778365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.778371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.778375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.778388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.788275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.788335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.788349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.788360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.788364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.788378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.798302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.798383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.798399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.798404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.798409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.798423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 
00:29:27.259 [2024-10-11 12:03:11.808315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.808386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.808421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.808428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.259 [2024-10-11 12:03:11.808433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.259 [2024-10-11 12:03:11.808452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.259 qpair failed and we were unable to recover it. 00:29:27.259 [2024-10-11 12:03:11.818407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.259 [2024-10-11 12:03:11.818478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.259 [2024-10-11 12:03:11.818511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.259 [2024-10-11 12:03:11.818518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.260 [2024-10-11 12:03:11.818523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.260 [2024-10-11 12:03:11.818543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.260 qpair failed and we were unable to recover it. 00:29:27.260 [2024-10-11 12:03:11.828366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.260 [2024-10-11 12:03:11.828424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.260 [2024-10-11 12:03:11.828443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.260 [2024-10-11 12:03:11.828449] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.260 [2024-10-11 12:03:11.828454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.260 [2024-10-11 12:03:11.828469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.260 qpair failed and we were unable to recover it. 
00:29:27.260 [2024-10-11 12:03:11.838414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.260 [2024-10-11 12:03:11.838507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.260 [2024-10-11 12:03:11.838525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.260 [2024-10-11 12:03:11.838531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.260 [2024-10-11 12:03:11.838536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.260 [2024-10-11 12:03:11.838551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.260 qpair failed and we were unable to recover it. 00:29:27.260 [2024-10-11 12:03:11.848437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.260 [2024-10-11 12:03:11.848502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.260 [2024-10-11 12:03:11.848518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.260 [2024-10-11 12:03:11.848523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.260 [2024-10-11 12:03:11.848528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.260 [2024-10-11 12:03:11.848542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.260 qpair failed and we were unable to recover it. 00:29:27.260 [2024-10-11 12:03:11.858532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.260 [2024-10-11 12:03:11.858601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.260 [2024-10-11 12:03:11.858618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.260 [2024-10-11 12:03:11.858624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.260 [2024-10-11 12:03:11.858629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.260 [2024-10-11 12:03:11.858643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.260 qpair failed and we were unable to recover it. 
00:29:27.260 [2024-10-11 12:03:11.868523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.260 [2024-10-11 12:03:11.868637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.260 [2024-10-11 12:03:11.868652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.260 [2024-10-11 12:03:11.868657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.260 [2024-10-11 12:03:11.868662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.260 [2024-10-11 12:03:11.868682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.260 qpair failed and we were unable to recover it. 00:29:27.260 [2024-10-11 12:03:11.878560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.260 [2024-10-11 12:03:11.878623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.260 [2024-10-11 12:03:11.878641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.260 [2024-10-11 12:03:11.878654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.260 [2024-10-11 12:03:11.878660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.260 [2024-10-11 12:03:11.878684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.260 qpair failed and we were unable to recover it. 00:29:27.260 [2024-10-11 12:03:11.888577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.523 [2024-10-11 12:03:11.888640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.523 [2024-10-11 12:03:11.888657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.523 [2024-10-11 12:03:11.888666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.523 [2024-10-11 12:03:11.888682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.523 [2024-10-11 12:03:11.888697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.523 qpair failed and we were unable to recover it. 
00:29:27.523 [2024-10-11 12:03:11.898648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.523 [2024-10-11 12:03:11.898721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.523 [2024-10-11 12:03:11.898735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.523 [2024-10-11 12:03:11.898740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.523 [2024-10-11 12:03:11.898745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.523 [2024-10-11 12:03:11.898758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.523 qpair failed and we were unable to recover it. 00:29:27.523 [2024-10-11 12:03:11.908642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.523 [2024-10-11 12:03:11.908740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.523 [2024-10-11 12:03:11.908757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.523 [2024-10-11 12:03:11.908762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.523 [2024-10-11 12:03:11.908767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.523 [2024-10-11 12:03:11.908781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.523 qpair failed and we were unable to recover it. 00:29:27.523 [2024-10-11 12:03:11.918655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.523 [2024-10-11 12:03:11.918719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.523 [2024-10-11 12:03:11.918735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.523 [2024-10-11 12:03:11.918740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.523 [2024-10-11 12:03:11.918745] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.523 [2024-10-11 12:03:11.918759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.523 qpair failed and we were unable to recover it. 
00:29:27.523 [2024-10-11 12:03:11.928712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.523 [2024-10-11 12:03:11.928776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.523 [2024-10-11 12:03:11.928797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.523 [2024-10-11 12:03:11.928803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.523 [2024-10-11 12:03:11.928807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.523 [2024-10-11 12:03:11.928823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.523 qpair failed and we were unable to recover it. 00:29:27.523 [2024-10-11 12:03:11.938762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.523 [2024-10-11 12:03:11.938841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.523 [2024-10-11 12:03:11.938856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.523 [2024-10-11 12:03:11.938861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.523 [2024-10-11 12:03:11.938866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.523 [2024-10-11 12:03:11.938881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.523 qpair failed and we were unable to recover it. 00:29:27.523 [2024-10-11 12:03:11.948794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.523 [2024-10-11 12:03:11.948893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.523 [2024-10-11 12:03:11.948909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.523 [2024-10-11 12:03:11.948914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.523 [2024-10-11 12:03:11.948919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.523 [2024-10-11 12:03:11.948934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.523 qpair failed and we were unable to recover it. 
00:29:27.523 [2024-10-11 12:03:11.958777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.523 [2024-10-11 12:03:11.958865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.523 [2024-10-11 12:03:11.958882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.523 [2024-10-11 12:03:11.958888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.523 [2024-10-11 12:03:11.958892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.523 [2024-10-11 12:03:11.958907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.523 qpair failed and we were unable to recover it. 00:29:27.524 [2024-10-11 12:03:11.968837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.524 [2024-10-11 12:03:11.968903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.524 [2024-10-11 12:03:11.968917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.524 [2024-10-11 12:03:11.968928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.524 [2024-10-11 12:03:11.968932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.524 [2024-10-11 12:03:11.968946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.524 qpair failed and we were unable to recover it. 00:29:27.524 [2024-10-11 12:03:11.978886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.524 [2024-10-11 12:03:11.978982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.524 [2024-10-11 12:03:11.978996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.524 [2024-10-11 12:03:11.979001] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.524 [2024-10-11 12:03:11.979006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.524 [2024-10-11 12:03:11.979020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.524 qpair failed and we were unable to recover it. 
00:29:27.524 [2024-10-11 12:03:11.988929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.524 [2024-10-11 12:03:11.989015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.524 [2024-10-11 12:03:11.989031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.524 [2024-10-11 12:03:11.989037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.524 [2024-10-11 12:03:11.989044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.524 [2024-10-11 12:03:11.989057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.524 qpair failed and we were unable to recover it. 00:29:27.524 [2024-10-11 12:03:11.998881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.524 [2024-10-11 12:03:11.998938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.524 [2024-10-11 12:03:11.998953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.524 [2024-10-11 12:03:11.998958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.524 [2024-10-11 12:03:11.998963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.524 [2024-10-11 12:03:11.998976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.524 qpair failed and we were unable to recover it. 00:29:27.524 [2024-10-11 12:03:12.008945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:27.524 [2024-10-11 12:03:12.009010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:27.524 [2024-10-11 12:03:12.009025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:27.524 [2024-10-11 12:03:12.009030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:27.524 [2024-10-11 12:03:12.009035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:27.524 [2024-10-11 12:03:12.009048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.524 qpair failed and we were unable to recover it. 
00:29:27.524 [2024-10-11 12:03:12.019002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.524 [2024-10-11 12:03:12.019066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.524 [2024-10-11 12:03:12.019080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.524 [2024-10-11 12:03:12.019085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.524 [2024-10-11 12:03:12.019090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.524 [2024-10-11 12:03:12.019103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.524 qpair failed and we were unable to recover it.
00:29:27.524 [2024-10-11 12:03:12.028860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.524 [2024-10-11 12:03:12.028924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.524 [2024-10-11 12:03:12.028938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.524 [2024-10-11 12:03:12.028944] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.524 [2024-10-11 12:03:12.028948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.524 [2024-10-11 12:03:12.028961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.524 qpair failed and we were unable to recover it.
00:29:27.524 [2024-10-11 12:03:12.039003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.524 [2024-10-11 12:03:12.039073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.524 [2024-10-11 12:03:12.039087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.524 [2024-10-11 12:03:12.039092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.524 [2024-10-11 12:03:12.039097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.524 [2024-10-11 12:03:12.039110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.524 qpair failed and we were unable to recover it.
00:29:27.524 [2024-10-11 12:03:12.049067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.524 [2024-10-11 12:03:12.049167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.524 [2024-10-11 12:03:12.049187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.524 [2024-10-11 12:03:12.049192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.524 [2024-10-11 12:03:12.049197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.524 [2024-10-11 12:03:12.049211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.524 qpair failed and we were unable to recover it.
00:29:27.524 [2024-10-11 12:03:12.059116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.524 [2024-10-11 12:03:12.059185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.524 [2024-10-11 12:03:12.059201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.524 [2024-10-11 12:03:12.059211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.524 [2024-10-11 12:03:12.059216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.524 [2024-10-11 12:03:12.059230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.524 qpair failed and we were unable to recover it.
00:29:27.524 [2024-10-11 12:03:12.069102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.524 [2024-10-11 12:03:12.069194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.524 [2024-10-11 12:03:12.069209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.524 [2024-10-11 12:03:12.069215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.524 [2024-10-11 12:03:12.069219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.069233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.525 [2024-10-11 12:03:12.079121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.525 [2024-10-11 12:03:12.079188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.525 [2024-10-11 12:03:12.079202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.525 [2024-10-11 12:03:12.079208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.525 [2024-10-11 12:03:12.079212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.079225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.525 [2024-10-11 12:03:12.089186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.525 [2024-10-11 12:03:12.089248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.525 [2024-10-11 12:03:12.089262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.525 [2024-10-11 12:03:12.089267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.525 [2024-10-11 12:03:12.089272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.089285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.525 [2024-10-11 12:03:12.099230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.525 [2024-10-11 12:03:12.099304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.525 [2024-10-11 12:03:12.099318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.525 [2024-10-11 12:03:12.099323] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.525 [2024-10-11 12:03:12.099328] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.099342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.525 [2024-10-11 12:03:12.109239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.525 [2024-10-11 12:03:12.109295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.525 [2024-10-11 12:03:12.109312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.525 [2024-10-11 12:03:12.109317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.525 [2024-10-11 12:03:12.109322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.109335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.525 [2024-10-11 12:03:12.119310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.525 [2024-10-11 12:03:12.119402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.525 [2024-10-11 12:03:12.119417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.525 [2024-10-11 12:03:12.119422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.525 [2024-10-11 12:03:12.119427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.119440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.525 [2024-10-11 12:03:12.129225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.525 [2024-10-11 12:03:12.129293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.525 [2024-10-11 12:03:12.129308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.525 [2024-10-11 12:03:12.129313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.525 [2024-10-11 12:03:12.129318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.129331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.525 [2024-10-11 12:03:12.139353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.525 [2024-10-11 12:03:12.139421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.525 [2024-10-11 12:03:12.139438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.525 [2024-10-11 12:03:12.139443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.525 [2024-10-11 12:03:12.139448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.139462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.525 [2024-10-11 12:03:12.149357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.525 [2024-10-11 12:03:12.149436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.525 [2024-10-11 12:03:12.149476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.525 [2024-10-11 12:03:12.149483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.525 [2024-10-11 12:03:12.149488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.525 [2024-10-11 12:03:12.149509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.525 qpair failed and we were unable to recover it.
00:29:27.788 [2024-10-11 12:03:12.159371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.788 [2024-10-11 12:03:12.159437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.788 [2024-10-11 12:03:12.159470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.788 [2024-10-11 12:03:12.159477] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.788 [2024-10-11 12:03:12.159482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.788 [2024-10-11 12:03:12.159502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.788 qpair failed and we were unable to recover it.
00:29:27.788 [2024-10-11 12:03:12.169412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.788 [2024-10-11 12:03:12.169514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.788 [2024-10-11 12:03:12.169548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.788 [2024-10-11 12:03:12.169555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.788 [2024-10-11 12:03:12.169560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.788 [2024-10-11 12:03:12.169581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.788 qpair failed and we were unable to recover it.
00:29:27.788 [2024-10-11 12:03:12.179384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.788 [2024-10-11 12:03:12.179480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.788 [2024-10-11 12:03:12.179498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.788 [2024-10-11 12:03:12.179503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.788 [2024-10-11 12:03:12.179508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.788 [2024-10-11 12:03:12.179523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.788 qpair failed and we were unable to recover it.
00:29:27.788 [2024-10-11 12:03:12.189481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.788 [2024-10-11 12:03:12.189548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.788 [2024-10-11 12:03:12.189563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.788 [2024-10-11 12:03:12.189568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.788 [2024-10-11 12:03:12.189573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.788 [2024-10-11 12:03:12.189587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.788 qpair failed and we were unable to recover it.
00:29:27.788 [2024-10-11 12:03:12.199503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.788 [2024-10-11 12:03:12.199561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.788 [2024-10-11 12:03:12.199576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.788 [2024-10-11 12:03:12.199582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.788 [2024-10-11 12:03:12.199587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.788 [2024-10-11 12:03:12.199601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.788 qpair failed and we were unable to recover it.
00:29:27.788 [2024-10-11 12:03:12.209395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.788 [2024-10-11 12:03:12.209456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.788 [2024-10-11 12:03:12.209473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.788 [2024-10-11 12:03:12.209478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.788 [2024-10-11 12:03:12.209483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.788 [2024-10-11 12:03:12.209497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.788 qpair failed and we were unable to recover it.
00:29:27.788 [2024-10-11 12:03:12.219575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.788 [2024-10-11 12:03:12.219645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.788 [2024-10-11 12:03:12.219659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.788 [2024-10-11 12:03:12.219664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.788 [2024-10-11 12:03:12.219672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.788 [2024-10-11 12:03:12.219686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.788 qpair failed and we were unable to recover it.
00:29:27.788 [2024-10-11 12:03:12.229562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.788 [2024-10-11 12:03:12.229621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.788 [2024-10-11 12:03:12.229637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.788 [2024-10-11 12:03:12.229642] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.788 [2024-10-11 12:03:12.229647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.229661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.239645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.239711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.239736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.239741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.239746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.239761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.249656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.249724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.249749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.249755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.249760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.249773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.259768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.259837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.259853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.259858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.259863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.259876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.269714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.269775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.269789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.269794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.269799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.269812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.279743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.279802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.279817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.279823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.279827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.279841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.289742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.289809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.289824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.289830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.289834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.289848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.299822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.299899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.299913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.299919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.299923] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.299937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.309809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.309877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.309894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.309899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.309903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.309918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.319833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.319891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.319905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.319911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.319915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.319928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.330030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.330104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.330124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.330129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.330133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.330147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.340033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.340104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.340123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.340128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.340133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.340148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.350004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.350063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.350077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.350083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.350087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.350100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.360028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.360094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.360111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.360116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.360120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.360134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.789 [2024-10-11 12:03:12.369990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.789 [2024-10-11 12:03:12.370056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.789 [2024-10-11 12:03:12.370070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.789 [2024-10-11 12:03:12.370075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.789 [2024-10-11 12:03:12.370079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.789 [2024-10-11 12:03:12.370098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.789 qpair failed and we were unable to recover it.
00:29:27.790 [2024-10-11 12:03:12.380078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.790 [2024-10-11 12:03:12.380150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.790 [2024-10-11 12:03:12.380164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.790 [2024-10-11 12:03:12.380169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.790 [2024-10-11 12:03:12.380173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.790 [2024-10-11 12:03:12.380186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.790 qpair failed and we were unable to recover it.
00:29:27.790 [2024-10-11 12:03:12.390071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.790 [2024-10-11 12:03:12.390135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.790 [2024-10-11 12:03:12.390149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.790 [2024-10-11 12:03:12.390155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.790 [2024-10-11 12:03:12.390160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.790 [2024-10-11 12:03:12.390172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.790 qpair failed and we were unable to recover it.
00:29:27.790 [2024-10-11 12:03:12.400145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.790 [2024-10-11 12:03:12.400211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.790 [2024-10-11 12:03:12.400226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.790 [2024-10-11 12:03:12.400231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.790 [2024-10-11 12:03:12.400236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.790 [2024-10-11 12:03:12.400249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.790 qpair failed and we were unable to recover it.
00:29:27.790 [2024-10-11 12:03:12.410151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:27.790 [2024-10-11 12:03:12.410244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:27.790 [2024-10-11 12:03:12.410259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:27.790 [2024-10-11 12:03:12.410265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:27.790 [2024-10-11 12:03:12.410269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:27.790 [2024-10-11 12:03:12.410282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:27.790 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.420195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.420274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.420293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.420299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.420303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.420317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.430074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.430140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.430155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.430160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.430164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.430178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.440239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.440306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.440319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.440324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.440329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.440342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.450277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.450339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.450353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.450358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.450362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.450376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.460340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.460411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.460445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.460452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.460457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.460483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.470263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.470325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.470358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.470365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.470370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.470390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.480200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.480255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.480273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.480279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.480283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.480298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.490345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.490405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.490423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.490428] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.490433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.490447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.500433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.053 [2024-10-11 12:03:12.500504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.053 [2024-10-11 12:03:12.500537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.053 [2024-10-11 12:03:12.500544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.053 [2024-10-11 12:03:12.500549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.053 [2024-10-11 12:03:12.500568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.053 qpair failed and we were unable to recover it.
00:29:28.053 [2024-10-11 12:03:12.510425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.054 [2024-10-11 12:03:12.510486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.054 [2024-10-11 12:03:12.510511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.054 [2024-10-11 12:03:12.510516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.054 [2024-10-11 12:03:12.510521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.054 [2024-10-11 12:03:12.510537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.054 qpair failed and we were unable to recover it.
00:29:28.054 [2024-10-11 12:03:12.520429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.054 [2024-10-11 12:03:12.520490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.054 [2024-10-11 12:03:12.520506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.054 [2024-10-11 12:03:12.520512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.054 [2024-10-11 12:03:12.520516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.054 [2024-10-11 12:03:12.520530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.054 qpair failed and we were unable to recover it.
00:29:28.054 [2024-10-11 12:03:12.530488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.054 [2024-10-11 12:03:12.530565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.054 [2024-10-11 12:03:12.530581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.054 [2024-10-11 12:03:12.530586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.054 [2024-10-11 12:03:12.530591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.054 [2024-10-11 12:03:12.530604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.054 qpair failed and we were unable to recover it.
00:29:28.054 [2024-10-11 12:03:12.540487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.054 [2024-10-11 12:03:12.540556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.054 [2024-10-11 12:03:12.540570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.054 [2024-10-11 12:03:12.540575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.054 [2024-10-11 12:03:12.540580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.054 [2024-10-11 12:03:12.540593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.054 qpair failed and we were unable to recover it.
00:29:28.054 [2024-10-11 12:03:12.550409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.054 [2024-10-11 12:03:12.550475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.054 [2024-10-11 12:03:12.550498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.054 [2024-10-11 12:03:12.550503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.054 [2024-10-11 12:03:12.550508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.054 [2024-10-11 12:03:12.550531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.054 qpair failed and we were unable to recover it.
00:29:28.054 [2024-10-11 12:03:12.560551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.054 [2024-10-11 12:03:12.560606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.054 [2024-10-11 12:03:12.560626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.054 [2024-10-11 12:03:12.560632] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.054 [2024-10-11 12:03:12.560636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.054 [2024-10-11 12:03:12.560651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.054 qpair failed and we were unable to recover it.
00:29:28.054 [2024-10-11 12:03:12.570599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.054 [2024-10-11 12:03:12.570662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.054 [2024-10-11 12:03:12.570681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.054 [2024-10-11 12:03:12.570686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.054 [2024-10-11 12:03:12.570691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.054 [2024-10-11 12:03:12.570706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.054 qpair failed and we were unable to recover it.
00:29:28.054 [2024-10-11 12:03:12.580663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.054 [2024-10-11 12:03:12.580735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.054 [2024-10-11 12:03:12.580749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.054 [2024-10-11 12:03:12.580755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.054 [2024-10-11 12:03:12.580759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.054 [2024-10-11 12:03:12.580773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.054 qpair failed and we were unable to recover it.
00:29:28.054 [2024-10-11 12:03:12.590653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.054 [2024-10-11 12:03:12.590716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.054 [2024-10-11 12:03:12.590731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.054 [2024-10-11 12:03:12.590736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.054 [2024-10-11 12:03:12.590740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.054 [2024-10-11 12:03:12.590754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.054 qpair failed and we were unable to recover it. 00:29:28.054 [2024-10-11 12:03:12.600704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.054 [2024-10-11 12:03:12.600762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.054 [2024-10-11 12:03:12.600787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.054 [2024-10-11 12:03:12.600792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.054 [2024-10-11 12:03:12.600797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.054 [2024-10-11 12:03:12.600811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.054 qpair failed and we were unable to recover it. 00:29:28.054 [2024-10-11 12:03:12.610704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.054 [2024-10-11 12:03:12.610768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.054 [2024-10-11 12:03:12.610784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.054 [2024-10-11 12:03:12.610789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.054 [2024-10-11 12:03:12.610793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.054 [2024-10-11 12:03:12.610807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.054 qpair failed and we were unable to recover it. 
00:29:28.054 [2024-10-11 12:03:12.620773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.054 [2024-10-11 12:03:12.620839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.054 [2024-10-11 12:03:12.620860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.054 [2024-10-11 12:03:12.620866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.054 [2024-10-11 12:03:12.620871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.054 [2024-10-11 12:03:12.620887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.054 qpair failed and we were unable to recover it. 00:29:28.054 [2024-10-11 12:03:12.630748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.054 [2024-10-11 12:03:12.630809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.054 [2024-10-11 12:03:12.630824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.054 [2024-10-11 12:03:12.630830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.054 [2024-10-11 12:03:12.630834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.054 [2024-10-11 12:03:12.630849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.054 qpair failed and we were unable to recover it. 00:29:28.054 [2024-10-11 12:03:12.640671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.054 [2024-10-11 12:03:12.640729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.054 [2024-10-11 12:03:12.640743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.054 [2024-10-11 12:03:12.640748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.055 [2024-10-11 12:03:12.640758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.055 [2024-10-11 12:03:12.640772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.055 qpair failed and we were unable to recover it. 
00:29:28.055 [2024-10-11 12:03:12.650849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.055 [2024-10-11 12:03:12.650911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.055 [2024-10-11 12:03:12.650925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.055 [2024-10-11 12:03:12.650930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.055 [2024-10-11 12:03:12.650934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.055 [2024-10-11 12:03:12.650948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-10-11 12:03:12.660835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.055 [2024-10-11 12:03:12.660900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.055 [2024-10-11 12:03:12.660915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.055 [2024-10-11 12:03:12.660920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.055 [2024-10-11 12:03:12.660925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.055 [2024-10-11 12:03:12.660939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.055 [2024-10-11 12:03:12.670759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.055 [2024-10-11 12:03:12.670813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.055 [2024-10-11 12:03:12.670826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.055 [2024-10-11 12:03:12.670831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.055 [2024-10-11 12:03:12.670836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.055 [2024-10-11 12:03:12.670848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.055 qpair failed and we were unable to recover it. 
00:29:28.055 [2024-10-11 12:03:12.680909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.055 [2024-10-11 12:03:12.680969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.055 [2024-10-11 12:03:12.680982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.055 [2024-10-11 12:03:12.680988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.055 [2024-10-11 12:03:12.680992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.055 [2024-10-11 12:03:12.681005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.055 qpair failed and we were unable to recover it. 00:29:28.317 [2024-10-11 12:03:12.690965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.317 [2024-10-11 12:03:12.691033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.317 [2024-10-11 12:03:12.691046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.317 [2024-10-11 12:03:12.691051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.317 [2024-10-11 12:03:12.691056] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.317 [2024-10-11 12:03:12.691067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.317 qpair failed and we were unable to recover it. 00:29:28.317 [2024-10-11 12:03:12.700960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.317 [2024-10-11 12:03:12.701015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.317 [2024-10-11 12:03:12.701027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.317 [2024-10-11 12:03:12.701032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.317 [2024-10-11 12:03:12.701036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.317 [2024-10-11 12:03:12.701048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.317 qpair failed and we were unable to recover it. 
00:29:28.317 [2024-10-11 12:03:12.710958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.317 [2024-10-11 12:03:12.711008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.317 [2024-10-11 12:03:12.711021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.317 [2024-10-11 12:03:12.711027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.317 [2024-10-11 12:03:12.711031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.317 [2024-10-11 12:03:12.711043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.317 qpair failed and we were unable to recover it. 00:29:28.317 [2024-10-11 12:03:12.721048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.317 [2024-10-11 12:03:12.721112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.317 [2024-10-11 12:03:12.721124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.317 [2024-10-11 12:03:12.721129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.317 [2024-10-11 12:03:12.721133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.317 [2024-10-11 12:03:12.721145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.317 qpair failed and we were unable to recover it. 00:29:28.317 [2024-10-11 12:03:12.731031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.317 [2024-10-11 12:03:12.731108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.317 [2024-10-11 12:03:12.731119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.317 [2024-10-11 12:03:12.731124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.317 [2024-10-11 12:03:12.731133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.317 [2024-10-11 12:03:12.731144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.317 qpair failed and we were unable to recover it. 
00:29:28.317 [2024-10-11 12:03:12.741055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.317 [2024-10-11 12:03:12.741103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.317 [2024-10-11 12:03:12.741115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.317 [2024-10-11 12:03:12.741120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.317 [2024-10-11 12:03:12.741124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.317 [2024-10-11 12:03:12.741135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.317 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.751130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.751178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.751190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.751195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.751199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.751210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.761124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.761172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.761186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.761191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.761196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.761207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 
00:29:28.318 [2024-10-11 12:03:12.771138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.771195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.771206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.771210] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.771215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.771226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.781146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.781201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.781212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.781217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.781221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.781232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.791183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.791257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.791269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.791273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.791278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.791289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 
00:29:28.318 [2024-10-11 12:03:12.801221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.801272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.801282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.801287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.801291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.801301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.811256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.811308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.811319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.811324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.811329] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.811339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.821259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.821303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.821314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.821319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.821326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.821337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 
00:29:28.318 [2024-10-11 12:03:12.831171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.831225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.831238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.831243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.831247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.831258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.841312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.841361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.841372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.841377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.841381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.841392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.851372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.851425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.851446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.851452] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.851456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.851471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 
00:29:28.318 [2024-10-11 12:03:12.861357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.861405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.861425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.861432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.861437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.861451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.871386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.871434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.871446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.871451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.871456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.871467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 00:29:28.318 [2024-10-11 12:03:12.881430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.318 [2024-10-11 12:03:12.881479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.318 [2024-10-11 12:03:12.881499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.318 [2024-10-11 12:03:12.881505] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.318 [2024-10-11 12:03:12.881510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.318 [2024-10-11 12:03:12.881524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.318 qpair failed and we were unable to recover it. 
00:29:28.318 [2024-10-11 12:03:12.891475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.319 [2024-10-11 12:03:12.891531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.319 [2024-10-11 12:03:12.891552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.319 [2024-10-11 12:03:12.891558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.319 [2024-10-11 12:03:12.891562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.319 [2024-10-11 12:03:12.891577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.319 qpair failed and we were unable to recover it. 00:29:28.319 [2024-10-11 12:03:12.901465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.319 [2024-10-11 12:03:12.901511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.319 [2024-10-11 12:03:12.901523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.319 [2024-10-11 12:03:12.901528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.319 [2024-10-11 12:03:12.901532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.319 [2024-10-11 12:03:12.901543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.319 qpair failed and we were unable to recover it. 00:29:28.319 [2024-10-11 12:03:12.911490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.319 [2024-10-11 12:03:12.911537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.319 [2024-10-11 12:03:12.911549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.319 [2024-10-11 12:03:12.911555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.319 [2024-10-11 12:03:12.911563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.319 [2024-10-11 12:03:12.911574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.319 qpair failed and we were unable to recover it. 
00:29:28.319 [2024-10-11 12:03:12.921505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.319 [2024-10-11 12:03:12.921551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.319 [2024-10-11 12:03:12.921561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.319 [2024-10-11 12:03:12.921566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.319 [2024-10-11 12:03:12.921570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.319 [2024-10-11 12:03:12.921580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.319 qpair failed and we were unable to recover it. 00:29:28.319 [2024-10-11 12:03:12.931575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.319 [2024-10-11 12:03:12.931623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.319 [2024-10-11 12:03:12.931633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.319 [2024-10-11 12:03:12.931638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.319 [2024-10-11 12:03:12.931642] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.319 [2024-10-11 12:03:12.931652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.319 qpair failed and we were unable to recover it. 00:29:28.319 [2024-10-11 12:03:12.941421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.319 [2024-10-11 12:03:12.941466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.319 [2024-10-11 12:03:12.941475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.319 [2024-10-11 12:03:12.941480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.319 [2024-10-11 12:03:12.941484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.319 [2024-10-11 12:03:12.941494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.319 qpair failed and we were unable to recover it. 
00:29:28.581 [2024-10-11 12:03:12.951605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.581 [2024-10-11 12:03:12.951646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.581 [2024-10-11 12:03:12.951655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.581 [2024-10-11 12:03:12.951660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.581 [2024-10-11 12:03:12.951664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.581 [2024-10-11 12:03:12.951679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.581 qpair failed and we were unable to recover it. 00:29:28.581 [2024-10-11 12:03:12.961643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.581 [2024-10-11 12:03:12.961702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.581 [2024-10-11 12:03:12.961714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.581 [2024-10-11 12:03:12.961719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.581 [2024-10-11 12:03:12.961723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.581 [2024-10-11 12:03:12.961734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.581 qpair failed and we were unable to recover it. 00:29:28.581 [2024-10-11 12:03:12.971656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.581 [2024-10-11 12:03:12.971713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.581 [2024-10-11 12:03:12.971723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.581 [2024-10-11 12:03:12.971728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.581 [2024-10-11 12:03:12.971733] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.581 [2024-10-11 12:03:12.971743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.581 qpair failed and we were unable to recover it. 
00:29:28.581 [2024-10-11 12:03:12.981663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.581 [2024-10-11 12:03:12.981713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.581 [2024-10-11 12:03:12.981722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.581 [2024-10-11 12:03:12.981727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.581 [2024-10-11 12:03:12.981731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.581 [2024-10-11 12:03:12.981741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.581 qpair failed and we were unable to recover it. 00:29:28.581 [2024-10-11 12:03:12.991640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.581 [2024-10-11 12:03:12.991682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.581 [2024-10-11 12:03:12.991691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.581 [2024-10-11 12:03:12.991696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.581 [2024-10-11 12:03:12.991700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.581 [2024-10-11 12:03:12.991710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.581 qpair failed and we were unable to recover it. 00:29:28.581 [2024-10-11 12:03:13.001748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.581 [2024-10-11 12:03:13.001799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.001811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.001819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.001824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.001835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 
00:29:28.582 [2024-10-11 12:03:13.011822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.011879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.011891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.011896] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.011901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.011912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.021759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.021803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.021814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.021819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.021823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.021834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.031773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.031819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.031830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.031834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.031839] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.031849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 
00:29:28.582 [2024-10-11 12:03:13.041832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.041883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.041893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.041897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.041902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.041912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.051866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.051916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.051925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.051930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.051934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.051944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.061905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.061950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.061960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.061965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.061969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.061979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 
00:29:28.582 [2024-10-11 12:03:13.071909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.071953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.071962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.071967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.071971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.071981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.081998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.082048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.082057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.082062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.082066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.082076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.091964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.092012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.092021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.092029] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.092033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.092043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 
00:29:28.582 [2024-10-11 12:03:13.101974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.102027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.102037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.102042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.102046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.102055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.112010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.112068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.112078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.112083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.112087] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.112097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.122074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.122119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.122129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.122134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.122138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.122147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 
00:29:28.582 [2024-10-11 12:03:13.132070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.132118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.132128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.132133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.132138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.582 [2024-10-11 12:03:13.132148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.582 qpair failed and we were unable to recover it. 00:29:28.582 [2024-10-11 12:03:13.142100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.582 [2024-10-11 12:03:13.142147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.582 [2024-10-11 12:03:13.142157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.582 [2024-10-11 12:03:13.142161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.582 [2024-10-11 12:03:13.142165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.583 [2024-10-11 12:03:13.142175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.583 qpair failed and we were unable to recover it. 00:29:28.583 [2024-10-11 12:03:13.152121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:28.583 [2024-10-11 12:03:13.152170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:28.583 [2024-10-11 12:03:13.152179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:28.583 [2024-10-11 12:03:13.152184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:28.583 [2024-10-11 12:03:13.152188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:28.583 [2024-10-11 12:03:13.152198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.583 qpair failed and we were unable to recover it. 
00:29:28.583 [2024-10-11 12:03:13.162178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.583 [2024-10-11 12:03:13.162254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.583 [2024-10-11 12:03:13.162265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.583 [2024-10-11 12:03:13.162270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.583 [2024-10-11 12:03:13.162274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.583 [2024-10-11 12:03:13.162284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.583 qpair failed and we were unable to recover it.
00:29:28.583 [2024-10-11 12:03:13.172226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.583 [2024-10-11 12:03:13.172276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.583 [2024-10-11 12:03:13.172286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.583 [2024-10-11 12:03:13.172291] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.583 [2024-10-11 12:03:13.172295] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.583 [2024-10-11 12:03:13.172305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.583 qpair failed and we were unable to recover it.
00:29:28.583 [2024-10-11 12:03:13.182164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.583 [2024-10-11 12:03:13.182207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.583 [2024-10-11 12:03:13.182217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.583 [2024-10-11 12:03:13.182224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.583 [2024-10-11 12:03:13.182228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.583 [2024-10-11 12:03:13.182238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.583 qpair failed and we were unable to recover it.
00:29:28.583 [2024-10-11 12:03:13.192090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.583 [2024-10-11 12:03:13.192134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.583 [2024-10-11 12:03:13.192145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.583 [2024-10-11 12:03:13.192150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.583 [2024-10-11 12:03:13.192154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.583 [2024-10-11 12:03:13.192165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.583 qpair failed and we were unable to recover it.
00:29:28.583 [2024-10-11 12:03:13.202255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.583 [2024-10-11 12:03:13.202310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.583 [2024-10-11 12:03:13.202324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.583 [2024-10-11 12:03:13.202329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.583 [2024-10-11 12:03:13.202333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.583 [2024-10-11 12:03:13.202344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.583 qpair failed and we were unable to recover it.
00:29:28.845 [2024-10-11 12:03:13.212301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.845 [2024-10-11 12:03:13.212356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.845 [2024-10-11 12:03:13.212366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.845 [2024-10-11 12:03:13.212371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.845 [2024-10-11 12:03:13.212376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.845 [2024-10-11 12:03:13.212386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.845 qpair failed and we were unable to recover it.
00:29:28.845 [2024-10-11 12:03:13.222338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.845 [2024-10-11 12:03:13.222393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.845 [2024-10-11 12:03:13.222403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.845 [2024-10-11 12:03:13.222408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.845 [2024-10-11 12:03:13.222412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.845 [2024-10-11 12:03:13.222422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.845 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.232241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.232286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.232305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.232311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.232317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.232330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.242406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.242449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.242461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.242466] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.242471] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.242481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.252316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.252367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.252378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.252383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.252387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.252397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.262398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.262448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.262459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.262464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.262468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.262479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.272442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.272497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.272507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.272515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.272520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.272530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.282515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.282560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.282570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.282575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.282579] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.282590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.292537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.292587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.292596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.292601] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.292606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.292615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.302506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.302547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.302557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.302562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.302566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.302576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.312555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.312600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.312610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.312615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.312619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.312629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.322622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.322715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.322725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.322730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.322735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.322745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.332642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.332694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.332704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.332709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.332714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.332724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.342662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.342707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.342717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.342722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.342726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.342737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.352663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.352707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.352717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.352722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.352727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.352737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.362596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.846 [2024-10-11 12:03:13.362647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.846 [2024-10-11 12:03:13.362660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.846 [2024-10-11 12:03:13.362665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.846 [2024-10-11 12:03:13.362675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.846 [2024-10-11 12:03:13.362686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.846 qpair failed and we were unable to recover it.
00:29:28.846 [2024-10-11 12:03:13.372762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.372813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.372824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.372829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.372833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.372843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.382739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.382790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.382801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.382806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.382810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.382821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.392787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.392829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.392839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.392844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.392848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.392858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.402821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.402914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.402924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.402928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.402932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.402942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.412876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.412924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.412935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.412940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.412944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.412954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.422866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.422916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.422925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.422930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.422934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.422944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.432859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.432900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.432909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.432914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.432918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.432928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.442954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.443022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.443032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.443037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.443041] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.443051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.452974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.453020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.453037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.453042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.453046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.453056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.462984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.463030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.463040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.463045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.463050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.463059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:28.847 [2024-10-11 12:03:13.473039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:28.847 [2024-10-11 12:03:13.473079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:28.847 [2024-10-11 12:03:13.473089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:28.847 [2024-10-11 12:03:13.473094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:28.847 [2024-10-11 12:03:13.473098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:28.847 [2024-10-11 12:03:13.473107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:28.847 qpair failed and we were unable to recover it.
00:29:29.119 [2024-10-11 12:03:13.483061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.119 [2024-10-11 12:03:13.483107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.119 [2024-10-11 12:03:13.483117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.119 [2024-10-11 12:03:13.483121] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.119 [2024-10-11 12:03:13.483126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.119 [2024-10-11 12:03:13.483135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.119 qpair failed and we were unable to recover it.
00:29:29.119 [2024-10-11 12:03:13.493092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.119 [2024-10-11 12:03:13.493140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.119 [2024-10-11 12:03:13.493149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.119 [2024-10-11 12:03:13.493154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.119 [2024-10-11 12:03:13.493158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.119 [2024-10-11 12:03:13.493168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.119 qpair failed and we were unable to recover it.
00:29:29.119 [2024-10-11 12:03:13.502957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.120 [2024-10-11 12:03:13.503003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.120 [2024-10-11 12:03:13.503012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.120 [2024-10-11 12:03:13.503017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.120 [2024-10-11 12:03:13.503021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.120 [2024-10-11 12:03:13.503031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.120 qpair failed and we were unable to recover it.
00:29:29.120 [2024-10-11 12:03:13.513101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.120 [2024-10-11 12:03:13.513189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.120 [2024-10-11 12:03:13.513199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.120 [2024-10-11 12:03:13.513204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.120 [2024-10-11 12:03:13.513208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.120 [2024-10-11 12:03:13.513218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.120 qpair failed and we were unable to recover it.
00:29:29.120 [2024-10-11 12:03:13.523198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.120 [2024-10-11 12:03:13.523240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.121 [2024-10-11 12:03:13.523250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.121 [2024-10-11 12:03:13.523255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.121 [2024-10-11 12:03:13.523259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.121 [2024-10-11 12:03:13.523269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.121 qpair failed and we were unable to recover it.
00:29:29.121 [2024-10-11 12:03:13.533193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.121 [2024-10-11 12:03:13.533244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.121 [2024-10-11 12:03:13.533253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.121 [2024-10-11 12:03:13.533258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.121 [2024-10-11 12:03:13.533262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.121 [2024-10-11 12:03:13.533271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.121 qpair failed and we were unable to recover it.
00:29:29.121 [2024-10-11 12:03:13.543192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.121 [2024-10-11 12:03:13.543242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.121 [2024-10-11 12:03:13.543254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.121 [2024-10-11 12:03:13.543259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.121 [2024-10-11 12:03:13.543263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.121 [2024-10-11 12:03:13.543273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.121 qpair failed and we were unable to recover it.
00:29:29.121 [2024-10-11 12:03:13.553203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.121 [2024-10-11 12:03:13.553250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.121 [2024-10-11 12:03:13.553259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.121 [2024-10-11 12:03:13.553264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.121 [2024-10-11 12:03:13.553268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.121 [2024-10-11 12:03:13.553278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.121 qpair failed and we were unable to recover it.
00:29:29.122 [2024-10-11 12:03:13.563272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.122 [2024-10-11 12:03:13.563323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.122 [2024-10-11 12:03:13.563334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.122 [2024-10-11 12:03:13.563339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.122 [2024-10-11 12:03:13.563343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.122 [2024-10-11 12:03:13.563353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.122 qpair failed and we were unable to recover it.
00:29:29.122 [2024-10-11 12:03:13.573316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.122 [2024-10-11 12:03:13.573365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.122 [2024-10-11 12:03:13.573375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.122 [2024-10-11 12:03:13.573380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.122 [2024-10-11 12:03:13.573384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.122 [2024-10-11 12:03:13.573393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.122 qpair failed and we were unable to recover it.
00:29:29.122 [2024-10-11 12:03:13.583307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.122 [2024-10-11 12:03:13.583348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.122 [2024-10-11 12:03:13.583358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.124 [2024-10-11 12:03:13.583363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.124 [2024-10-11 12:03:13.583367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.124 [2024-10-11 12:03:13.583379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.124 qpair failed and we were unable to recover it.
00:29:29.124 [2024-10-11 12:03:13.593326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.124 [2024-10-11 12:03:13.593373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.124 [2024-10-11 12:03:13.593393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.124 [2024-10-11 12:03:13.593399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.124 [2024-10-11 12:03:13.593404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.124 [2024-10-11 12:03:13.593418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.124 qpair failed and we were unable to recover it.
00:29:29.124 [2024-10-11 12:03:13.603391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.124 [2024-10-11 12:03:13.603442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.124 [2024-10-11 12:03:13.603462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.124 [2024-10-11 12:03:13.603468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.124 [2024-10-11 12:03:13.603473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.124 [2024-10-11 12:03:13.603487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.124 qpair failed and we were unable to recover it.
00:29:29.124 [2024-10-11 12:03:13.613421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.124 [2024-10-11 12:03:13.613473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.124 [2024-10-11 12:03:13.613493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.124 [2024-10-11 12:03:13.613499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.124 [2024-10-11 12:03:13.613504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.124 [2024-10-11 12:03:13.613519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.124 qpair failed and we were unable to recover it.
00:29:29.124 [2024-10-11 12:03:13.623295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.124 [2024-10-11 12:03:13.623345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.124 [2024-10-11 12:03:13.623356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.124 [2024-10-11 12:03:13.623361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.124 [2024-10-11 12:03:13.623365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.124 [2024-10-11 12:03:13.623376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.124 qpair failed and we were unable to recover it.
00:29:29.125 [2024-10-11 12:03:13.633414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.125 [2024-10-11 12:03:13.633504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.125 [2024-10-11 12:03:13.633518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.125 [2024-10-11 12:03:13.633523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.125 [2024-10-11 12:03:13.633527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.125 [2024-10-11 12:03:13.633537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.125 qpair failed and we were unable to recover it.
00:29:29.125 [2024-10-11 12:03:13.643492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.125 [2024-10-11 12:03:13.643578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.125 [2024-10-11 12:03:13.643588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.125 [2024-10-11 12:03:13.643593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.125 [2024-10-11 12:03:13.643598] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.125 [2024-10-11 12:03:13.643607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.125 qpair failed and we were unable to recover it.
00:29:29.125 [2024-10-11 12:03:13.653525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.125 [2024-10-11 12:03:13.653575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.125 [2024-10-11 12:03:13.653585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.125 [2024-10-11 12:03:13.653590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.125 [2024-10-11 12:03:13.653594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.125 [2024-10-11 12:03:13.653604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.125 qpair failed and we were unable to recover it.
00:29:29.125 [2024-10-11 12:03:13.663395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.125 [2024-10-11 12:03:13.663444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.126 [2024-10-11 12:03:13.663455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.126 [2024-10-11 12:03:13.663460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.126 [2024-10-11 12:03:13.663464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.126 [2024-10-11 12:03:13.663474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.126 qpair failed and we were unable to recover it.
00:29:29.126 [2024-10-11 12:03:13.673493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.126 [2024-10-11 12:03:13.673565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.126 [2024-10-11 12:03:13.673575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.126 [2024-10-11 12:03:13.673580] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.126 [2024-10-11 12:03:13.673584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.126 [2024-10-11 12:03:13.673597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.126 qpair failed and we were unable to recover it.
00:29:29.126 [2024-10-11 12:03:13.683626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.126 [2024-10-11 12:03:13.683677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.126 [2024-10-11 12:03:13.683687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.126 [2024-10-11 12:03:13.683692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.126 [2024-10-11 12:03:13.683696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.126 [2024-10-11 12:03:13.683706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.126 qpair failed and we were unable to recover it.
00:29:29.126 [2024-10-11 12:03:13.693639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.126 [2024-10-11 12:03:13.693692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.126 [2024-10-11 12:03:13.693702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.126 [2024-10-11 12:03:13.693707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.126 [2024-10-11 12:03:13.693711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.127 [2024-10-11 12:03:13.693721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.127 qpair failed and we were unable to recover it.
00:29:29.127 [2024-10-11 12:03:13.703638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.127 [2024-10-11 12:03:13.703688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.127 [2024-10-11 12:03:13.703698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.127 [2024-10-11 12:03:13.703703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.127 [2024-10-11 12:03:13.703707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.127 [2024-10-11 12:03:13.703717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.127 qpair failed and we were unable to recover it.
00:29:29.127 [2024-10-11 12:03:13.713627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.127 [2024-10-11 12:03:13.713673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.127 [2024-10-11 12:03:13.713684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.127 [2024-10-11 12:03:13.713689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.127 [2024-10-11 12:03:13.713694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.127 [2024-10-11 12:03:13.713704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.127 qpair failed and we were unable to recover it.
00:29:29.127 [2024-10-11 12:03:13.723715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.127 [2024-10-11 12:03:13.723757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.127 [2024-10-11 12:03:13.723770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.127 [2024-10-11 12:03:13.723774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.127 [2024-10-11 12:03:13.723779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.127 [2024-10-11 12:03:13.723789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.127 qpair failed and we were unable to recover it.
00:29:29.127 [2024-10-11 12:03:13.733740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.127 [2024-10-11 12:03:13.733789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.127 [2024-10-11 12:03:13.733799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.127 [2024-10-11 12:03:13.733803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.127 [2024-10-11 12:03:13.733808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.127 [2024-10-11 12:03:13.733818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.127 qpair failed and we were unable to recover it.
00:29:29.127 [2024-10-11 12:03:13.743697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.127 [2024-10-11 12:03:13.743744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.127 [2024-10-11 12:03:13.743755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.127 [2024-10-11 12:03:13.743760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.127 [2024-10-11 12:03:13.743764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.127 [2024-10-11 12:03:13.743774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.127 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.753749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.753794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.753804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.753809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.753813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.753823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.763820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.763867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.763877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.763882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.763886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.763899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.773861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.773909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.773918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.773923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.773928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.773937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.783723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.783767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.783777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.783782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.783787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.783797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.793867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.793910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.793921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.793925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.793930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.793940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.803968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.804031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.804040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.804045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.804050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.804060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.813982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.814036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.814049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.814054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.814058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.814068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.823948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.823996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.824006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.824011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.824015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.824025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.833959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.833999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.834009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.834014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.834018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.834029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.844048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.844093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.844103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.844108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.844112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.844122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.854065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.854114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.854124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.854128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.854135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.854145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.864064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.864109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.864119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.864124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.864128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.864138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.874100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.874160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.874170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.389 [2024-10-11 12:03:13.874175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.389 [2024-10-11 12:03:13.874179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.389 [2024-10-11 12:03:13.874189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.389 qpair failed and we were unable to recover it.
00:29:29.389 [2024-10-11 12:03:13.884131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.389 [2024-10-11 12:03:13.884175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.389 [2024-10-11 12:03:13.884185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.884190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.884194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.884204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.894079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.894171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.894181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.894185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.894190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.894200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.904200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.904247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.904257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.904262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.904266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.904276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.914161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.914247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.914258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.914263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.914267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.914277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.924273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.924317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.924327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.924331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.924336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.924345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.934278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.934327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.934337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.934342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.934346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.934356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.944297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.944344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.944354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.944358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.944367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.944376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.954302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.954353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.954367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.954372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.954376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.954388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.964366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.964459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.964469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.964474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.964478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.964488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.974395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.974442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.974452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.974457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.974461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.974471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.984373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.984419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.984430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.984434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.984439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.984449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:13.994422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:13.994472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:13.994482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:13.994487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:13.994491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:13.994502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:14.004500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:14.004574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:14.004585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:14.004590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:14.004595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:14.004605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.390 [2024-10-11 12:03:14.014512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.390 [2024-10-11 12:03:14.014564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.390 [2024-10-11 12:03:14.014576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.390 [2024-10-11 12:03:14.014581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.390 [2024-10-11 12:03:14.014586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.390 [2024-10-11 12:03:14.014596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.390 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.024516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.024561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.024572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.024576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.024581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.024591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.034527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.034570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.034582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.034587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.034595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.034605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.044571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.044622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.044632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.044637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.044641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.044651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.054625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.054678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.054688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.054693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.054697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.054707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.064492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.064538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.064548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.064552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.064557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.064567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.074630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.074683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.074693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.074699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.074704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.074714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.084706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.084792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.084802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.084807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.084811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.084821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.094611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.094659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.094675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.094680] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.094684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.094695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.104703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.104743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.104754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.104759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.104763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.104773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.114743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.114789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.114799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.114804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.114808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.114819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.124828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.124872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.124882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.124887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.124894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.124904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.134850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.653 [2024-10-11 12:03:14.134898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.653 [2024-10-11 12:03:14.134908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.653 [2024-10-11 12:03:14.134913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.653 [2024-10-11 12:03:14.134917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.653 [2024-10-11 12:03:14.134927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.653 qpair failed and we were unable to recover it.
00:29:29.653 [2024-10-11 12:03:14.144841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.144887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.144896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.144901] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.144905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.144915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.154849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.154892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.154901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.154906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.154910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.154920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.164935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.165020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.165030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.165035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.165039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.165049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.174924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.174980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.174990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.174994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.174999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.175008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.184952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.185005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.185015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.185019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.185024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.185033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.195012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.195069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.195079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.195084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.195088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.195098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.205021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.205091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.205100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.205105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.205110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.205119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.215072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.215123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.215134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.215142] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.215146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.215157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.225072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.225114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.225126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.225131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.225136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.225146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.235085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.235131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.235141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.235146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.235150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.235161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.245160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.245210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.245219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.245224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.245229] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.245239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.255193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.255241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.255250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.255255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.255259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.255269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.265182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.265224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.265235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.265240] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.265244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.265254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.654 [2024-10-11 12:03:14.275184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.654 [2024-10-11 12:03:14.275227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.654 [2024-10-11 12:03:14.275237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.654 [2024-10-11 12:03:14.275241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.654 [2024-10-11 12:03:14.275246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.654 [2024-10-11 12:03:14.275255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.654 qpair failed and we were unable to recover it.
00:29:29.917 [2024-10-11 12:03:14.285247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.917 [2024-10-11 12:03:14.285288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.917 [2024-10-11 12:03:14.285298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.917 [2024-10-11 12:03:14.285303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.917 [2024-10-11 12:03:14.285307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.917 [2024-10-11 12:03:14.285317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.917 qpair failed and we were unable to recover it.
00:29:29.917 [2024-10-11 12:03:14.295293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:29.917 [2024-10-11 12:03:14.295342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:29.917 [2024-10-11 12:03:14.295352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:29.917 [2024-10-11 12:03:14.295357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:29.917 [2024-10-11 12:03:14.295361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0
00:29:29.917 [2024-10-11 12:03:14.295371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:29.917 qpair failed and we were unable to recover it.
00:29:29.917 [2024-10-11 12:03:14.305285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.917 [2024-10-11 12:03:14.305330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.917 [2024-10-11 12:03:14.305340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.917 [2024-10-11 12:03:14.305348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.917 [2024-10-11 12:03:14.305353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.917 [2024-10-11 12:03:14.305362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.917 qpair failed and we were unable to recover it. 00:29:29.917 [2024-10-11 12:03:14.315297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.917 [2024-10-11 12:03:14.315344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.917 [2024-10-11 12:03:14.315354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.917 [2024-10-11 12:03:14.315359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.917 [2024-10-11 12:03:14.315363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.917 [2024-10-11 12:03:14.315373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.917 qpair failed and we were unable to recover it. 00:29:29.917 [2024-10-11 12:03:14.325238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.917 [2024-10-11 12:03:14.325292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.917 [2024-10-11 12:03:14.325301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.917 [2024-10-11 12:03:14.325306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.917 [2024-10-11 12:03:14.325310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.917 [2024-10-11 12:03:14.325320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.917 qpair failed and we were unable to recover it. 
00:29:29.917 [2024-10-11 12:03:14.335404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.917 [2024-10-11 12:03:14.335454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.917 [2024-10-11 12:03:14.335464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.917 [2024-10-11 12:03:14.335468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.917 [2024-10-11 12:03:14.335473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.917 [2024-10-11 12:03:14.335482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.917 qpair failed and we were unable to recover it. 00:29:29.917 [2024-10-11 12:03:14.345390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.917 [2024-10-11 12:03:14.345437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.917 [2024-10-11 12:03:14.345458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.917 [2024-10-11 12:03:14.345464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.917 [2024-10-11 12:03:14.345468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.917 [2024-10-11 12:03:14.345482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.917 qpair failed and we were unable to recover it. 00:29:29.917 [2024-10-11 12:03:14.355414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.917 [2024-10-11 12:03:14.355497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.917 [2024-10-11 12:03:14.355510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.917 [2024-10-11 12:03:14.355515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.917 [2024-10-11 12:03:14.355519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.917 [2024-10-11 12:03:14.355530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.917 qpair failed and we were unable to recover it. 
00:29:29.917 [2024-10-11 12:03:14.365435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.365489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.365509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.365515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.365520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.365536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.375503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.375551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.375563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.375569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.375573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.375584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.385478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.385520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.385531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.385535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.385540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.385550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 
00:29:29.918 [2024-10-11 12:03:14.395527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.395569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.395579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.395587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.395591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.395602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.405591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.405683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.405694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.405699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.405703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.405713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.415621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.415673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.415684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.415689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.415693] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.415704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 
00:29:29.918 [2024-10-11 12:03:14.425656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.425741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.425753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.425758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.425762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.425773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.435604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.435661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.435674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.435679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.435683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.435693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.445691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.445758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.445768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.445773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.445778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.445788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 
00:29:29.918 [2024-10-11 12:03:14.455723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.455800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.455810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.455815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.455819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.455829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.465722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.465769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.465779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.465784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.465788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.465798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.475724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.475799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.475809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.475814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.475818] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.475828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 
00:29:29.918 [2024-10-11 12:03:14.485813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.485872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.485882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.485890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.485894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.485903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.918 [2024-10-11 12:03:14.495837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.918 [2024-10-11 12:03:14.495885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.918 [2024-10-11 12:03:14.495895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.918 [2024-10-11 12:03:14.495899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.918 [2024-10-11 12:03:14.495904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.918 [2024-10-11 12:03:14.495914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.918 qpair failed and we were unable to recover it. 00:29:29.919 [2024-10-11 12:03:14.505808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.919 [2024-10-11 12:03:14.505850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.919 [2024-10-11 12:03:14.505859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.919 [2024-10-11 12:03:14.505864] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.919 [2024-10-11 12:03:14.505868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.919 [2024-10-11 12:03:14.505878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.919 qpair failed and we were unable to recover it. 
00:29:29.919 [2024-10-11 12:03:14.515722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.919 [2024-10-11 12:03:14.515765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.919 [2024-10-11 12:03:14.515777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.919 [2024-10-11 12:03:14.515782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.919 [2024-10-11 12:03:14.515786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.919 [2024-10-11 12:03:14.515797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.919 qpair failed and we were unable to recover it. 00:29:29.919 [2024-10-11 12:03:14.525930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.919 [2024-10-11 12:03:14.525974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.919 [2024-10-11 12:03:14.525985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.919 [2024-10-11 12:03:14.525989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.919 [2024-10-11 12:03:14.525994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.919 [2024-10-11 12:03:14.526004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.919 qpair failed and we were unable to recover it. 00:29:29.919 [2024-10-11 12:03:14.535961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.919 [2024-10-11 12:03:14.536008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.919 [2024-10-11 12:03:14.536017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.919 [2024-10-11 12:03:14.536022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.919 [2024-10-11 12:03:14.536026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.919 [2024-10-11 12:03:14.536036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.919 qpair failed and we were unable to recover it. 
00:29:29.919 [2024-10-11 12:03:14.545966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:29.919 [2024-10-11 12:03:14.546012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:29.919 [2024-10-11 12:03:14.546023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:29.919 [2024-10-11 12:03:14.546027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:29.919 [2024-10-11 12:03:14.546031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:29.919 [2024-10-11 12:03:14.546042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.919 qpair failed and we were unable to recover it. 00:29:30.181 [2024-10-11 12:03:14.555972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.556040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.556049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.556054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.556058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.556068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 00:29:30.181 [2024-10-11 12:03:14.566009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.566059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.566069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.566074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.566078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.566088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 
00:29:30.181 [2024-10-11 12:03:14.575935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.576031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.576043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.576048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.576053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.576063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 00:29:30.181 [2024-10-11 12:03:14.586043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.586089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.586099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.586104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.586108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.586118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 00:29:30.181 [2024-10-11 12:03:14.596061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.596109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.596118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.596123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.596127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.596137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 
00:29:30.181 [2024-10-11 12:03:14.606116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.606164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.606174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.606179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.606183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.606193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 00:29:30.181 [2024-10-11 12:03:14.616155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.616234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.616246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.616251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.616255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.616266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 00:29:30.181 [2024-10-11 12:03:14.626207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.626280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.626292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.626296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.626300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.626311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 
00:29:30.181 [2024-10-11 12:03:14.636194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.636233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.636242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.636247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.636251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.636261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 00:29:30.181 [2024-10-11 12:03:14.646231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.181 [2024-10-11 12:03:14.646278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.181 [2024-10-11 12:03:14.646288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.181 [2024-10-11 12:03:14.646293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.181 [2024-10-11 12:03:14.646297] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.181 [2024-10-11 12:03:14.646307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.181 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.656295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.656376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.656385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.656390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.656394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.656404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 
00:29:30.182 [2024-10-11 12:03:14.666244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.666286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.666300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.666305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.666310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.666320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.676265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.676312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.676322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.676326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.676331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.676341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.686360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.686448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.686458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.686462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.686467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.686476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 
00:29:30.182 [2024-10-11 12:03:14.696403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.696465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.696474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.696479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.696483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.696493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.706381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.706427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.706437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.706442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.706446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.706458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.716275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.716327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.716337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.716342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.716346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.716356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 
00:29:30.182 [2024-10-11 12:03:14.726470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.726517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.726527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.726532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.726536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.726546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.736530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.736584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.736604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.736610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.736614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.736628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.746515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.746597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.746608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.746613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.746618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.746629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 
00:29:30.182 [2024-10-11 12:03:14.756519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.756559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.756572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.756577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.756581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.756592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.766571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.766618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.766628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.766633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.766638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.766648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.776519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.776566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.776576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.776581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.776585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.776595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 
00:29:30.182 [2024-10-11 12:03:14.786610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.182 [2024-10-11 12:03:14.786688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.182 [2024-10-11 12:03:14.786699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.182 [2024-10-11 12:03:14.786703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.182 [2024-10-11 12:03:14.786708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.182 [2024-10-11 12:03:14.786718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.182 qpair failed and we were unable to recover it. 00:29:30.182 [2024-10-11 12:03:14.796671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.183 [2024-10-11 12:03:14.796745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.183 [2024-10-11 12:03:14.796755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.183 [2024-10-11 12:03:14.796760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.183 [2024-10-11 12:03:14.796764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.183 [2024-10-11 12:03:14.796777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.183 qpair failed and we were unable to recover it. 00:29:30.183 [2024-10-11 12:03:14.806660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.183 [2024-10-11 12:03:14.806710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.183 [2024-10-11 12:03:14.806720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.183 [2024-10-11 12:03:14.806725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.183 [2024-10-11 12:03:14.806729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.183 [2024-10-11 12:03:14.806739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.183 qpair failed and we were unable to recover it. 
00:29:30.445 [2024-10-11 12:03:14.816760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.445 [2024-10-11 12:03:14.816856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.445 [2024-10-11 12:03:14.816867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.445 [2024-10-11 12:03:14.816872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.445 [2024-10-11 12:03:14.816877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.445 [2024-10-11 12:03:14.816886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-10-11 12:03:14.826590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.445 [2024-10-11 12:03:14.826634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.445 [2024-10-11 12:03:14.826644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.445 [2024-10-11 12:03:14.826648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.445 [2024-10-11 12:03:14.826653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.445 [2024-10-11 12:03:14.826662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.445 qpair failed and we were unable to recover it. 00:29:30.445 [2024-10-11 12:03:14.836587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.445 [2024-10-11 12:03:14.836627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.445 [2024-10-11 12:03:14.836638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.445 [2024-10-11 12:03:14.836643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.445 [2024-10-11 12:03:14.836647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x102dbd0 00:29:30.445 [2024-10-11 12:03:14.836657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.445 qpair failed and we were unable to recover it. 
00:29:30.445 [... 45 further I/O qpair CONNECT attempts between 12:03:14.846 and 12:03:15.287 fail identically (ctrlr.c: Unknown controller ID 0x1; nvme_fabric.c: Connect command failed, rc -5; sct 1, sc 130; tqpair=0x102dbd0, qpair id 3), each ending: qpair failed and we were unable to recover it. ...]
00:29:30.710 [2024-10-11 12:03:15.298070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.710 [2024-10-11 12:03:15.298187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.711 [2024-10-11 12:03:15.298254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.711 [2024-10-11 12:03:15.298279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.711 [2024-10-11 12:03:15.298300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe6b4000b90 00:29:30.711 [2024-10-11 12:03:15.298355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-11 12:03:15.308023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.711 [2024-10-11 12:03:15.308092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.711 [2024-10-11 12:03:15.308122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.711 [2024-10-11 12:03:15.308136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.711 [2024-10-11 12:03:15.308149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe6b4000b90 00:29:30.711 [2024-10-11 12:03:15.308178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-11 12:03:15.318036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.711 [2024-10-11 12:03:15.318143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.711 [2024-10-11 12:03:15.318209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.711 [2024-10-11 12:03:15.318234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.711 [2024-10-11 12:03:15.318254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe6ac000b90 00:29:30.711 [2024-10-11 12:03:15.318307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.711 qpair failed and we were unable to recover it. 
00:29:30.711 [2024-10-11 12:03:15.328091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:30.711 [2024-10-11 12:03:15.328178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:30.711 [2024-10-11 12:03:15.328215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:30.711 [2024-10-11 12:03:15.328234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:30.711 [2024-10-11 12:03:15.328251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe6ac000b90 00:29:30.711 [2024-10-11 12:03:15.328289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:30.711 qpair failed and we were unable to recover it. 00:29:30.711 [2024-10-11 12:03:15.328484] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:30.711 A controller has encountered a failure and is being reset. 00:29:30.711 [2024-10-11 12:03:15.328614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10249a0 (9): Bad file descriptor 00:29:30.971 Controller properly reset. 00:29:30.971 Initializing NVMe Controllers 00:29:30.971 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:30.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:30.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:30.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:30.972 Initialization complete. Launching workers. 
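Note on the failure signature above: the target side rejects each I/O-queue CONNECT with "Unknown controller ID 0x1", and the host side reports sct 1, sc 130. Status code type 1 is command-specific status, and 130 (0x82) is Connect Invalid Parameters in the NVMe-oF fabrics command set, which is the rejection you would expect while the disconnect test tears the controller down underneath the initiator. The sketch below is not part of the test suite; it assumes the console output was saved to a file, and the script name and output format are illustrative only. It shows one way to summarize such a log after the fact.

    #!/usr/bin/env bash
    # Hypothetical post-mortem helper (assumption: autotest console output saved to a file).
    set -euo pipefail
    log="${1:?usage: $0 <console.log>}"

    # Count occurrences rather than lines: the log packs several entries per line.
    echo -n "unrecovered qpair failures: "
    grep -o 'qpair failed and we were unable to recover it' "$log" | wc -l

    # Distinct (sct, sc) pairs reported by nvme_fabric_qpair_connect_poll;
    # sct 1, sc 130 (0x82) decodes to Connect Invalid Parameters.
    grep -oE 'sct [0-9]+, sc [0-9]+' "$log" | sort | uniq -c | sort -rn

Invoked as, e.g., ./summarize_connect_failures.sh console.log (both names are assumptions), it would report the failure count and confirm that every rejection in this run carried the same sct 1 / sc 130 status.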
00:29:30.972 Starting thread on core 1 00:29:30.972 Starting thread on core 2 00:29:30.972 Starting thread on core 3 00:29:30.972 Starting thread on core 0 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:30.972 00:29:30.972 real 0m11.431s 00:29:30.972 user 0m21.800s 00:29:30.972 sys 0m4.018s 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:30.972 ************************************ 00:29:30.972 END TEST nvmf_target_disconnect_tc2 00:29:30.972 ************************************ 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.972 rmmod nvme_tcp 00:29:30.972 rmmod nvme_fabrics 00:29:30.972 rmmod nvme_keyring 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1204082 ']' 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1204082 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1204082 ']' 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1204082 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:30.972 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1204082 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1204082' 00:29:31.232 killing process with pid 1204082 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1204082 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1204082 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.232 12:03:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.781 12:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.781 00:29:33.781 real 0m21.788s 00:29:33.781 user 0m49.643s 00:29:33.781 sys 0m10.143s 00:29:33.781 12:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:33.781 12:03:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:33.781 ************************************ 00:29:33.781 END TEST nvmf_target_disconnect 00:29:33.781 ************************************ 00:29:33.781 12:03:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:33.781 00:29:33.781 real 6m25.971s 00:29:33.781 user 11m16.831s 00:29:33.781 sys 2m14.070s 00:29:33.781 12:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:33.781 12:03:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:33.781 ************************************ 00:29:33.781 END TEST nvmf_host 00:29:33.781 ************************************ 00:29:33.781 12:03:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:33.782 12:03:17 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:33.782 12:03:17 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:33.782 12:03:17 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:33.782 12:03:17 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:33.782 12:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:33.782 ************************************ 00:29:33.782 START TEST nvmf_target_core_interrupt_mode 00:29:33.782 ************************************ 00:29:33.782 12:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:33.782 * Looking for test storage... 00:29:33.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:33.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.782 --rc genhtml_branch_coverage=1 00:29:33.782 --rc genhtml_function_coverage=1 00:29:33.782 --rc genhtml_legend=1 00:29:33.782 --rc geninfo_all_blocks=1 00:29:33.782 --rc geninfo_unexecuted_blocks=1 00:29:33.782 00:29:33.782 ' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:33.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.782 --rc genhtml_branch_coverage=1 00:29:33.782 --rc genhtml_function_coverage=1 00:29:33.782 --rc genhtml_legend=1 00:29:33.782 --rc geninfo_all_blocks=1 00:29:33.782 --rc geninfo_unexecuted_blocks=1 00:29:33.782 00:29:33.782 ' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:33.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.782 --rc genhtml_branch_coverage=1 00:29:33.782 --rc genhtml_function_coverage=1 00:29:33.782 --rc genhtml_legend=1 00:29:33.782 --rc geninfo_all_blocks=1 00:29:33.782 --rc geninfo_unexecuted_blocks=1 00:29:33.782 00:29:33.782 ' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:33.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.782 --rc genhtml_branch_coverage=1 00:29:33.782 --rc genhtml_function_coverage=1 00:29:33.782 --rc genhtml_legend=1 00:29:33.782 --rc geninfo_all_blocks=1 00:29:33.782 --rc geninfo_unexecuted_blocks=1 00:29:33.782 00:29:33.782 ' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:33.782 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:33.783 ************************************ 00:29:33.783 START TEST nvmf_abort 00:29:33.783 ************************************ 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:33.783 * Looking for test storage... 00:29:33.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:33.783 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.044 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:34.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.045 --rc genhtml_branch_coverage=1 00:29:34.045 --rc genhtml_function_coverage=1 00:29:34.045 --rc genhtml_legend=1 00:29:34.045 --rc geninfo_all_blocks=1 00:29:34.045 --rc geninfo_unexecuted_blocks=1 00:29:34.045 00:29:34.045 ' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:34.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.045 --rc genhtml_branch_coverage=1 00:29:34.045 --rc genhtml_function_coverage=1 00:29:34.045 --rc genhtml_legend=1 00:29:34.045 --rc geninfo_all_blocks=1 00:29:34.045 --rc geninfo_unexecuted_blocks=1 00:29:34.045 00:29:34.045 ' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:34.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.045 --rc genhtml_branch_coverage=1 00:29:34.045 --rc genhtml_function_coverage=1 00:29:34.045 --rc genhtml_legend=1 00:29:34.045 --rc geninfo_all_blocks=1 00:29:34.045 --rc geninfo_unexecuted_blocks=1 00:29:34.045 00:29:34.045 ' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:34.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.045 --rc genhtml_branch_coverage=1 00:29:34.045 --rc genhtml_function_coverage=1 00:29:34.045 --rc genhtml_legend=1 00:29:34.045 --rc geninfo_all_blocks=1 00:29:34.045 --rc geninfo_unexecuted_blocks=1 00:29:34.045 00:29:34.045 ' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.045 12:03:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.045 12:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.188 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.189 12:03:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:42.189 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
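The device-ID bookkeeping traced above reduces to a small classification table. A condensed sketch of that logic from test/nvmf/common.sh follows, assuming the pci_bus_cache associative array (keyed "vendor:device", valued PCI addresses) was populated earlier in the same script:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    # Intel E810 family; 0x159b is what matched 0000:4b:00.0 / 0000:4b:00.1 above
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    # Intel X722 and the Mellanox ConnectX IDs checked for RDMA-capable runs
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
    # ...remaining mlx IDs as listed in the trace above...
    pci_devs=("${e810[@]}")   # a tcp run on e810 hardware keeps only the e810 ports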
00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:42.189 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:42.189 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:42.189 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.189 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:29:42.190 00:29:42.190 --- 10.0.0.2 ping statistics --- 00:29:42.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.190 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:29:42.190 00:29:42.190 --- 10.0.0.1 ping statistics --- 00:29:42.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.190 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1209882 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1209882 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1209882 ']' 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:42.190 12:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.190 [2024-10-11 12:03:25.982140] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:42.190 [2024-10-11 12:03:25.983275] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:29:42.190 [2024-10-11 12:03:25.983327] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.190 [2024-10-11 12:03:26.073418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:42.190 [2024-10-11 12:03:26.125442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.190 [2024-10-11 12:03:26.125499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.190 [2024-10-11 12:03:26.125507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.190 [2024-10-11 12:03:26.125514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.190 [2024-10-11 12:03:26.125521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.190 [2024-10-11 12:03:26.127566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.190 [2024-10-11 12:03:26.127731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.190 [2024-10-11 12:03:26.127732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.190 [2024-10-11 12:03:26.206111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:42.190 [2024-10-11 12:03:26.207215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:42.190 [2024-10-11 12:03:26.207431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
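The nvmf_tcp_init sequence traced above splits the two E810 ports across a network namespace so target and initiator traffic actually crosses the wire: cvl_0_0 (10.0.0.2, target side) moves into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, initiator side) stays in the root namespace. A minimal replay of the traced steps (the iptables comment string is shortened here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag lets teardown strip exactly this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

Both pings answer in under a millisecond above (0.660 ms and 0.278 ms), confirming the topology before the target is started.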
00:29:42.190 [2024-10-11 12:03:26.207599] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:42.190 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.190 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:42.190 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:42.190 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.190 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.452 [2024-10-11 12:03:26.844628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.452 Malloc0 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.452 Delay0 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
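nvmfappstart -m 0xE pins the target to a deliberate core set: 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches the three "Reactor started on core" lines and "Total cores available: 3" above. Core 0 (mask 0x1) is left free, and that is exactly where the abort example below is launched with -c 0x1, so target and initiator never contend for the same core.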
00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.452 [2024-10-11 12:03:26.944580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.452 12:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:42.452 [2024-10-11 12:03:27.074540] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:45.000 Initializing NVMe Controllers 00:29:45.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:45.000 controller IO queue size 128 less than required 00:29:45.000 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:45.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:45.000 Initialization complete. Launching workers. 
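rpc_cmd in the trace is the test harness's wrapper for driving the target's RPC socket (/var/tmp/spdk.sock); a roughly equivalent standalone provisioning sequence via scripts/rpc.py, with the flag values copied from the trace, would look like the sketch below. The delay-bdev latencies are in microseconds, so 1000000 is about 1 s per I/O, which is what keeps enough commands in flight for the abort tool to catch:

    rpc="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB RAM disk, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420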
00:29:45.000 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28695 00:29:45.000 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28752, failed to submit 66 00:29:45.000 success 28695, unsuccessful 57, failed 0 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.000 rmmod nvme_tcp 00:29:45.000 rmmod nvme_fabrics 00:29:45.000 rmmod nvme_keyring 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1209882 ']' 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1209882 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1209882 ']' 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1209882 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1209882 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1209882' 00:29:45.000 killing process with pid 1209882 
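The abort statistics above are self-consistent: 123 completed + 28695 failed = 28818 I/Os issued in total, and with one abort attempted per outstanding command, 28752 submitted + 66 failed-to-submit = 28818 again. Of the submitted aborts, 28695 success + 57 unsuccessful = 28752, and each successful abort is what turned the corresponding I/O into a "failed" (aborted) completion, so the two 28695 counts matching is expected rather than coincidental.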
00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1209882 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1209882 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.000 12:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.547 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:47.547 00:29:47.547 real 0m13.431s 00:29:47.547 user 0m11.443s 00:29:47.547 sys 0m6.846s 00:29:47.547 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:47.547 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.547 ************************************ 00:29:47.547 END TEST nvmf_abort 00:29:47.547 ************************************ 00:29:47.547 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:47.547 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:47.547 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:47.548 ************************************ 00:29:47.548 START TEST nvmf_ns_hotplug_stress 00:29:47.548 ************************************ 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:47.548 * Looking for test storage... 
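Teardown mirrors the earlier setup: the iptr helper drops only the SPDK_NVMF-tagged firewall rule by round-tripping the ruleset through grep, _remove_spdk_ns tears down the namespace (its body is not shown in the trace; ip netns delete is the assumed core of it, which returns cvl_0_0 to the root namespace), and the leftover initiator address is flushed:

    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1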
00:29:47.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:47.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.548 --rc genhtml_branch_coverage=1 00:29:47.548 --rc genhtml_function_coverage=1 00:29:47.548 --rc genhtml_legend=1 00:29:47.548 --rc geninfo_all_blocks=1 00:29:47.548 --rc geninfo_unexecuted_blocks=1 00:29:47.548 00:29:47.548 ' 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:47.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.548 --rc genhtml_branch_coverage=1 00:29:47.548 --rc genhtml_function_coverage=1 00:29:47.548 --rc genhtml_legend=1 00:29:47.548 --rc geninfo_all_blocks=1 00:29:47.548 --rc geninfo_unexecuted_blocks=1 00:29:47.548 00:29:47.548 ' 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:47.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.548 --rc genhtml_branch_coverage=1 00:29:47.548 --rc genhtml_function_coverage=1 00:29:47.548 --rc genhtml_legend=1 00:29:47.548 --rc geninfo_all_blocks=1 00:29:47.548 --rc geninfo_unexecuted_blocks=1 00:29:47.548 00:29:47.548 ' 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:47.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.548 --rc genhtml_branch_coverage=1 00:29:47.548 --rc genhtml_function_coverage=1 
00:29:47.548 --rc genhtml_legend=1 00:29:47.548 --rc geninfo_all_blocks=1 00:29:47.548 --rc geninfo_unexecuted_blocks=1 00:29:47.548 00:29:47.548 ' 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.548 12:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
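[annotation] The lcov check a few entries back walks cmp_versions component by component: split both versions on IFS=.-:, then compare numeric fields until one side wins. A standalone sketch reconstructed from that trace, not the exact scripts/common.sh source (the real helper also regex-guards non-numeric components, which this sketch assumes away):

  cmp_versions() {
      # Split on '.', '-' and ':' exactly as the trace shows (IFS=.-:)
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local op=$2 v a b
      # Walk the longer component list, padding the shorter side with zeros
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && { [[ $op == '>' ]]; return; }   # left side wins: true iff asked '>'
          (( a < b )) && { [[ $op == '<' ]]; return; }   # right side wins: true iff asked '<'
      done
      return 1   # versions equal: neither strictly '<' nor '>'
  }
  # Same call the trace makes: "is lcov 1.15 older than 2?"
  cmp_versions 1.15 '<' 2 && echo "lcov is older than 2, enable branch/function coverage opts"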
00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:47.548 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:47.549 12:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:55.690 12:03:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:55.690 12:03:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:55.690 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:55.690 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:55.690 
12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:55.690 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:55.690 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.690 12:03:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:55.690 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:55.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:55.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:29:55.691 00:29:55.691 --- 10.0.0.2 ping statistics --- 00:29:55.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.691 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:55.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:55.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:29:55.691 00:29:55.691 --- 10.0.0.1 ping statistics --- 00:29:55.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.691 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1214875 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1214875 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1214875 ']' 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
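[annotation] The nvmf_tcp_init sequence just completed moves the target-side port into a private network namespace and leaves the initiator port in the root namespace, then verifies reachability in both directions. Condensed from the commands visible in the trace (the cvl_0_* names are this rig's ice-driver devices; other machines will expose different names):

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The rule is tagged with an SPDK_NVMF comment so teardown can strip it,
  # which is exactly what the iptables-save | grep -v SPDK_NVMF pass at the
  # top of this section does
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator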
00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:55.691 12:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:55.691 [2024-10-11 12:03:39.509559] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:55.691 [2024-10-11 12:03:39.510692] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:29:55.691 [2024-10-11 12:03:39.510746] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.691 [2024-10-11 12:03:39.600322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:55.691 [2024-10-11 12:03:39.652372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:55.691 [2024-10-11 12:03:39.652423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:55.691 [2024-10-11 12:03:39.652431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:55.691 [2024-10-11 12:03:39.652438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:55.691 [2024-10-11 12:03:39.652444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:55.691 [2024-10-11 12:03:39.654511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:55.691 [2024-10-11 12:03:39.654672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.691 [2024-10-11 12:03:39.654686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.691 [2024-10-11 12:03:39.731150] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:55.691 [2024-10-11 12:03:39.732146] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:55.691 [2024-10-11 12:03:39.732663] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:55.691 [2024-10-11 12:03:39.732819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
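[annotation] The nvmf_tgt launch above passes -m 0xE, which is why the EAL reports three cores and the log shows reactors starting on cores 1, 2 and 3 (core 0 is left free for the host). The mask decodes bit-by-bit; a one-line check of that arithmetic:

  # 0xE = 0b1110: bit 0 clear, bits 1-3 set; bit i set => a reactor on core i
  for i in 0 1 2 3; do (( (0xE >> i) & 1 )) && echo "reactor on core $i"; done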
00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:55.952 [2024-10-11 12:03:40.535865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.952 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:56.212 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.473 [2024-10-11 12:03:40.948618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.473 12:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:56.734 12:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:56.995 Malloc0 00:29:56.995 12:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:56.995 Delay0 00:29:56.995 12:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.256 12:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:57.516 NULL1 00:29:57.516 12:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
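[annotation] Stripped of the harness prefixes, the target bring-up the trace just performed reduces to the rpc.py sequence below, all taken verbatim from the commands above (comments on what each size/latency argument means are this annotation's reading, hedged accordingly: the delay values appear to be read/write average and tail latencies in microseconds, i.e. 1 s each):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0            # 32 MiB backing device, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1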
00:29:57.516 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:57.516 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1215252 00:29:57.516 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:29:57.516 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.777 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.038 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:58.038 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:58.299 true 00:29:58.299 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:29:58.299 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.561 12:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.561 12:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:58.561 12:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:58.821 true 00:29:58.821 12:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:29:58.821 12:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.208 Read completed with error (sct=0, sc=11) 00:30:00.208 12:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:00.208 12:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:00.208 12:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:00.469 true 00:30:00.469 12:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:00.469 12:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.469 12:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.730 12:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:00.730 12:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:00.991 true 00:30:00.991 12:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:00.991 12:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.933 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.194 12:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:02.194 12:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:02.194 12:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:02.455 true 00:30:02.455 12:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:02.455 12:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.396 12:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.396 12:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1006 00:30:03.396 12:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:03.658 true 00:30:03.658 12:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:03.658 12:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.918 12:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.918 12:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:03.918 12:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:04.179 true 00:30:04.179 12:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:04.179 12:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.563 12:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:05.564 12:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:05.564 12:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:05.824 true 00:30:05.824 12:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:05.824 12:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.766 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.766 12:03:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:06.766 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:07.027 true 00:30:07.027 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:07.027 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.027 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.288 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:07.288 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:07.549 true 00:30:07.549 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:07.549 12:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:08.490 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.752 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:08.752 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:09.012 true 00:30:09.013 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:09.013 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.013 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.273 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:09.273 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:09.534 true 
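[annotation] Each null_size=10NN / bdev_null_resize / kill -0 1215252 cluster in this stretch is one turn of the hotplug loop: remove namespace 1, re-add Delay0, grow NULL1 by one MiB, and assert the perf process survived the churn. A sketch of that pattern reconstructed from the trace; the while-loop framing and its termination condition are this annotation's guess at the control flow, since the real bounds live in ns_hotplug_stress.sh:

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$SPDK_ROOT/scripts/rpc.py

  # -Q 1000 rate-limits error prints, hence the repeated
  # "Message suppressed 999 times" lines in the log
  "$SPDK_ROOT/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do            # stop once perf exits after -t 30
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$(( null_size + 1 ))
      $rpc bdev_null_resize NULL1 "$null_size"         # the bare 'true' lines in the log
  done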
00:30:09.534 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:09.534 12:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.918 12:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:10.918 12:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:10.918 12:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:10.918 true 00:30:10.918 12:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:10.918 12:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.858 12:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:12.119 12:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:12.119 12:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:12.119 true 00:30:12.119 12:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:12.119 12:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.380 12:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.641 12:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:12.641 12:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:12.641 true 00:30:12.641 12:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:12.641 12:03:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.026 12:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.026 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:14.026 12:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:14.026 12:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:14.287 true 00:30:14.287 12:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:14.287 12:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.230 12:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.230 12:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:15.230 12:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:15.491 true 00:30:15.491 12:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:15.491 12:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.491 12:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.752 12:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:15.752 12:04:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:16.013 true 00:30:16.013 12:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:16.013 12:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.013 12:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.274 12:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:16.274 12:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:16.535 true 00:30:16.535 12:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:16.535 12:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.478 12:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.478 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:17.478 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:17.739 true 00:30:17.739 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:17.739 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.000 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.000 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:18.000 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:18.261 true 00:30:18.261 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:18.261 12:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.647 12:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:19.647 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:19.647 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:19.647 true 00:30:19.647 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:19.647 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.908 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.169 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:20.169 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:20.169 true 00:30:20.169 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:20.169 12:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.555 12:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:21.555 12:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:21.555 12:04:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:21.816 true 00:30:21.816 12:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:21.816 12:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.077 12:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.077 12:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:22.077 12:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:22.343 true 00:30:22.343 12:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:22.343 12:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.605 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.605 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:22.605 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:22.866 true 00:30:22.866 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:22.866 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.136 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.136 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:23.136 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:23.402 true 00:30:23.402 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:23.402 12:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.663 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.663 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:23.663 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:23.925 true 00:30:23.925 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:23.925 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.186 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.186 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:24.186 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:24.447 true 00:30:24.447 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:24.447 12:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.708 12:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.708 12:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:24.708 12:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:24.969 true 00:30:24.969 12:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:24.969 12:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.229 12:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.490 12:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:25.490 12:04:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:25.490 true 00:30:25.490 12:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:25.490 12:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.751 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.752 12:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:25.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.013 12:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:26.013 12:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:26.013 true 00:30:26.013 12:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:26.013 12:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.956 12:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.223 12:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:27.223 12:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:27.223 true 00:30:27.223 12:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252 00:30:27.223 12:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.516 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:27.820 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:30:27.820 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:30:27.820 true
00:30:27.820 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252
00:30:27.820 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:27.820 Initializing NVMe Controllers
00:30:27.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:27.820 Controller IO queue size 128, less than required.
00:30:27.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:27.820 Controller IO queue size 128, less than required.
00:30:27.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:27.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:27.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:27.820 Initialization complete. Launching workers.
00:30:27.820 ========================================================
00:30:27.820                                                                Latency(us)
00:30:27.820 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:27.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1866.48       0.91   36535.60    1527.48 1010815.04
00:30:27.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15973.50       7.80    7986.42    1154.08  400999.60
00:30:27.820 ========================================================
00:30:27.820 Total                                                                    :   17839.98       8.71   10973.32    1154.08 1010815.04
00:30:27.820
00:30:28.169 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:28.169 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:30:28.169 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:30:28.445 true
00:30:28.445 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1215252
00:30:28.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1215252) - No such process
00:30:28.445 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1215252
00:30:28.445 12:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:28.706 12:04:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:28.706 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:28.706 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:28.706 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:28.706 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:28.706 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:28.967 null0 00:30:28.967 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:28.967 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:28.967 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:28.967 null1 00:30:29.228 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:29.228 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:29.228 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:29.228 null2 00:30:29.228 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:29.228 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:29.228 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:29.489 null3 00:30:29.489 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:29.489 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:29.489 12:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:29.489 null4 00:30:29.751 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:29.751 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:29.751 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:29.751 null5 00:30:29.751 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:29.751 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:29.751 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:30.012 null6 00:30:30.012 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:30.012 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:30.012 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:30.274 null7 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
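The eight null bdevs created above (null0 through null7, each of size 100 MB with a 4096-byte block size) come from the setup loop that the xtrace shows at ns_hotplug_stress.sh lines 58-60. A minimal standalone sketch of that step, assuming the SPDK target is already running and that rpc.py refers to the workspace copy used in this run:

    #!/usr/bin/env bash
    # Recreate the traced setup: one null bdev per hotplug worker thread.
    # The rpc shorthand is an assumption for readability, not in the script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()   # filled in later, when the add_remove workers are launched
    for ((i = 0; i < nthreads; i++)); do
        # bdev_null_create <name> <size in MB> <block size in bytes>
        "$rpc" bdev_null_create "null$i" 100 4096
    done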
00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
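Each add_remove worker being launched here is the small helper whose body the xtrace exposes at ns_hotplug_stress.sh lines 14-18: it takes a namespace ID and a bdev name, then attaches and detaches that namespace ten times in a row. A sketch reconstructed from the traced lines (function name, variables, and the NQN are exactly as logged, but treat this as a reconstruction, not the verbatim script):

    # Reconstructed from the xtrace at ns_hotplug_stress.sh@14-@18.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }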
00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
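The pids+=($!) entries accumulate one background PID per worker. Building on the add_remove sketch above, and based on the traced lines 62-64 together with the wait at line 66 over PIDs 1221551 through 1221569 logged just below, the launch step looks roughly like this:

    for ((i = 0; i < nthreads; i++)); do
        # Namespace IDs are 1-based while bdev names are 0-based:
        # add_remove 1 null0, add_remove 2 null1, ..., add_remove 8 null7.
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # block until all eight hotplug workers finish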
00:30:30.274 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1221551 1221553 1221556 1221559 1221561 1221564 1221567 1221569 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.275 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:30.275 
12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:30.537 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:30.537 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:30.537 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:30.537 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:30.537 12:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
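Because the eight workers run concurrently, their xtrace lines interleave nondeterministically from here on. For a live view of which namespaces are attached at any instant while they churn, the standard nvmf_get_subsystems RPC can be polled; the jq filter below is an illustrative addition, not part of the traced script, and assumes jq is installed:

    # Hypothetical spot check: list the NSIDs currently attached to cnode1.
    "$rpc" nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | [.namespaces[].nsid]'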
00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.537 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:30.798 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
7 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:30.799 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.061 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:31.062 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:31.324 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:31.585 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:31.585 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:31.585 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:31.585 12:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:31.585 12:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:31.585 12:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:31.585 12:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:31.585 12:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
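The @16-@18 markers above all point into the namespace hotplug loop of target/ns_hotplug_stress.sh: @16 is the for-loop header whose (( ++i )) / (( i < 10 )) checks appear between RPCs, @17 attaches a namespace, @18 detaches one. A minimal sketch of that loop as reconstructed from this trace (the shuf-based shuffling is inferred from the varying per-pass ordering; the rpc variable and the pre-created null0-null7 bdevs are assumptions):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        # attach namespaces 1-8, each backed by the null bdev with the matching index
        for n in $(seq 1 8 | shuf); do
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # detach them again in a fresh random order
        for n in $(seq 1 8 | shuf); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

The subsystem stays live throughout, so any connected initiator sees namespaces appear and disappear on every pass; that churn, rather than the RPCs themselves, is what the stress test exercises.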
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:30:34.056 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:34.057 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1214875 ']'
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1214875
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1214875 ']'
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1214875
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1214875
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1214875'
killing process with pid 1214875
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1214875
00:30:34.057 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1214875
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
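The entries above are nvmftestfini tearing the run down: nvmfcleanup (nvmf/common.sh@121-@129) syncs and retries unloading the kernel initiator modules, killprocess (common/autotest_common.sh@950-@974) stops the target app, and iptr (@789) flushes SPDK's firewall rules. A condensed sketch of that sequence, reconstructed from those markers (the sleep between modprobe attempts is an assumption, and the real killprocess carries more error handling than shown):

    sync
    set +e
    for i in {1..20}; do                      # retry while connections drain
        modprobe -v -r nvme-tcp && break      # the rmmod lines above are its verbose output
        sleep 1                               # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e

    killprocess() {
        local pid=$1                                        # 1214875 in this run
        kill -0 "$pid" || return 0                          # nothing left to do
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop SPDK_NVMF rules, keep everything else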
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:34.318 12:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:36.232 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:36.232
00:30:36.232 real 0m49.001s
00:30:36.232 user 2m56.275s
00:30:36.232 sys 0m21.073s
00:30:36.232 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:36.232 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:36.232 ************************************
00:30:36.232 END TEST nvmf_ns_hotplug_stress
00:30:36.232 ************************************
00:30:36.232 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:36.232 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:30:36.232 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:36.232 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:36.232 ************************************
00:30:36.232 START TEST nvmf_delete_subsystem
00:30:36.232 ************************************
00:30:36.232 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:36.495 * Looking for test storage...
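run_test (common/autotest_common.sh@1101-@1126) is the wrapper producing the banners and the real/user/sys summary that separate nvmf_ns_hotplug_stress from nvmf_delete_subsystem above. Roughly, assuming the banner is printed the way it appears in this log:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                   # the real/user/sys lines above come from this
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

    run_test nvmf_delete_subsystem \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh \
        --transport=tcp --interrupt-mode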
00:30:36.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:36.495 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:30:36.495 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:30:36.495 12:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
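The scripts/common.sh@333-@368 entries trace the dotted-version comparison behind lt 1.15 2, which decides whether the old --rc lcov_branch_coverage flags are needed: both version strings are split on '.', '-' and ':' and compared component by component. A compact sketch of the same logic (padding a missing component with 0 is an assumption; the real helper also validates every component through decimal(), as the @353-@355 entries show):

    cmp_versions() {              # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} == ${ver2[v]:-0} )) && continue
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == *'<'* ]] && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == *'>'* ]] && return 0
            return 1
        done
        [[ $op == *'='* ]]        # all components equal: true only for <=, >=, ==
    }
    lt() { cmp_versions "$1" '<' "$2"; }

Here lcov reports 1.15, so lt 1.15 2 returns 0 and lcov_rc_opt is set to the 1.x-style flags shown just above.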
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:30:36.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:36.495 --rc genhtml_branch_coverage=1
00:30:36.495 --rc genhtml_function_coverage=1
00:30:36.495 --rc genhtml_legend=1
00:30:36.495 --rc geninfo_all_blocks=1
00:30:36.495 --rc geninfo_unexecuted_blocks=1
00:30:36.495
00:30:36.495 '
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:30:36.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:36.495 --rc genhtml_branch_coverage=1
00:30:36.495 --rc genhtml_function_coverage=1
00:30:36.495 --rc genhtml_legend=1
00:30:36.495 --rc geninfo_all_blocks=1
00:30:36.495 --rc geninfo_unexecuted_blocks=1
00:30:36.495
00:30:36.495 '
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:30:36.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:36.495 --rc genhtml_branch_coverage=1
00:30:36.495 --rc genhtml_function_coverage=1
00:30:36.495 --rc genhtml_legend=1
00:30:36.495 --rc geninfo_all_blocks=1
00:30:36.495 --rc geninfo_unexecuted_blocks=1
00:30:36.495
00:30:36.495 '
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:30:36.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:36.495 --rc genhtml_branch_coverage=1
00:30:36.495 --rc genhtml_function_coverage=1
00:30:36.495 --rc genhtml_legend=1
00:30:36.495 --rc geninfo_all_blocks=1
00:30:36.495 --rc geninfo_unexecuted_blocks=1
00:30:36.495
00:30:36.495 '
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:36.495 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:36.496 12:04:21
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.496 12:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.642 12:04:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.642 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.643 12:04:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:44.643 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:44.643 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.643 12:04:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:44.643 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:44.643 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:44.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:44.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms
00:30:44.643
00:30:44.643 --- 10.0.0.2 ping statistics ---
00:30:44.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:44.643 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:44.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:44.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms
00:30:44.643
00:30:44.643 --- 10.0.0.1 ping statistics ---
00:30:44.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:44.643 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:44.643 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1226528
00:30:44.644 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1226528
00:30:44.644 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:30:44.644 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1226528 ']'
00:30:44.644 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:44.644 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:44.644 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
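The trace above is the whole point-to-point rig for this test: one NIC port (cvl_0_0) is moved into a private network namespace to play the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, a tagged iptables exception opens the NVMe/TCP port, and a ping in each direction proves the link. A minimal standalone sketch of the same setup, using only the interface names, addresses, and rule recorded in the trace (run as root; error handling omitted):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Allow NVMe/TCP (port 4420) in on the initiator interface, tagged so
  # teardown can strip it again by filtering iptables-save output.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

Tagging the rule with an SPDK_NVMF comment is what lets the cleanup phase later in this log remove it with a plain grep -v over iptables-save.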
00:30:44.644 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:44.644 12:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.644 [2024-10-11 12:04:28.554359] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:44.644 [2024-10-11 12:04:28.555500] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:30:44.644 [2024-10-11 12:04:28.555552] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.644 [2024-10-11 12:04:28.643325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:44.644 [2024-10-11 12:04:28.696092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.644 [2024-10-11 12:04:28.696146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.644 [2024-10-11 12:04:28.696155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.644 [2024-10-11 12:04:28.696162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.644 [2024-10-11 12:04:28.696169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.644 [2024-10-11 12:04:28.697738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.644 [2024-10-11 12:04:28.697781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.644 [2024-10-11 12:04:28.774311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:44.644 [2024-10-11 12:04:28.774892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:44.644 [2024-10-11 12:04:28.775212] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
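The notices above are the interrupt-mode startup path: nvmf_tgt is launched inside the target namespace with --interrupt-mode, both reactors come up on cores 0-1 (-m 0x3), and each spdk_thread is switched to interrupt mode. A rough sketch of the launch-and-wait step, assuming the SPDK build tree from the trace; the polling loop below is only my approximation of the real waitforlisten helper in test/common/autotest_common.sh:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # Poll the default UNIX-domain RPC socket until the app answers
  # (rpc_get_methods is a standard SPDK RPC; socket path as in the trace).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.1
  done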
00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.906 [2024-10-11 12:04:29.418780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.906 [2024-10-11 12:04:29.451457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.906 NULL1 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.906 12:04:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.906 Delay0 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1226874 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:44.906 12:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:45.168 [2024-10-11 12:04:29.562306] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
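All of the target state built up above can be reproduced with plain scripts/rpc.py calls; rpc_cmd in this trace is a thin wrapper over that script talking to /var/tmp/spdk.sock. The parameters are copied from the trace: the null bdev is 1000 MiB with 512-byte blocks, and the Delay0 wrapper injects one second (1000000 us) of latency on every read and write, which is what guarantees a full queue of 128 commands is still in flight when the subsystem is deleted two seconds into the run. A sketch (the RPC variable is mine, not the test's):

  RPC="./scripts/rpc.py"   # assumes the default /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512               # 1000 MiB bdev, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read+write latency (us)
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Queue I/O from the initiator side, then delete the subsystem mid-flight.
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1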
00:30:47.084 12:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:47.084 12:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:47.084 12:04:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:47.084 Read completed with error (sct=0, sc=8)
00:30:47.084 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" entries omitted ...]
00:30:47.084 [2024-10-11 12:04:31.645762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f2520 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:30:47.085 [2024-10-11 12:04:31.650007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc83000d310 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:48.027 [2024-10-11 12:04:32.622346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f30a0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:48.027 [2024-10-11 12:04:32.649106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f09e0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:48.027 [2024-10-11 12:04:32.649624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0da0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:48.027 [2024-10-11 12:04:32.650810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc83000cfe0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:30:48.027 [2024-10-11 12:04:32.652452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc83000d640 is same with the state(6) to be set
00:30:48.027 Initializing NVMe Controllers
00:30:48.027 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:48.027 Controller IO queue size 128, less than required.
00:30:48.027 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:48.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:48.027 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:48.027 Initialization complete. Launching workers.
00:30:48.027 ========================================================
00:30:48.027                                                                                                          Latency(us)
00:30:48.027 Device Information                                                                   :       IOPS      MiB/s    Average        min        max
00:30:48.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     172.15       0.08  891068.29     378.37 1007837.27
00:30:48.027 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     166.18       0.08  906047.42     190.00 1011737.58
00:30:48.027 ========================================================
00:30:48.027 Total                                                                             :     338.33       0.17  898425.69     190.00 1011737.58
00:30:48.027
00:30:48.027 [2024-10-11 12:04:32.652892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f30a0 (9): Bad file descriptor
00:30:48.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:48.027 12:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:48.027 12:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:30:48.027 12:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1226874
00:30:48.027 12:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1226874
00:30:48.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1226874) - No such process
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1226874
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1226874
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1226874
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:48.598 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:48.599 [2024-10-11 12:04:33.187396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1227547
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227547
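Two things are worth noting in the fallout above. Every queued command completes with (sct=0, sc=8); reading that pair as status-code-type 0 (NVMe generic command status), value 0x08 is, to my understanding of the spec, "command aborted due to SQ deletion" — exactly what tearing the subsystem (and with it the submission queues) out from under a live controller should produce, so spdk_nvme_perf exits nonzero. The script then treats that failure as success: it polls until perf is gone, then asserts a nonzero exit status. A sketch of that expected-failure pattern, approximating the delay loop and the NOT/wait helpers seen in the trace:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
      (( delay++ > 30 )) && { echo "perf outlived the deleted subsystem" >&2; exit 1; }
      sleep 0.5
  done
  # Bash retains the exit status of its own reaped children, so wait here
  # returns perf's status; the test passes only if that status is nonzero.
  if wait "$perf_pid"; then
      echo "spdk_nvme_perf unexpectedly succeeded" >&2
      exit 1
  fi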
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:30:48.599 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:48.859 [2024-10-11 12:04:33.272725] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:30:49.120 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:49.120 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227547
00:30:49.120 12:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:49.692 12:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:49.692 12:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227547
00:30:49.692 12:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:50.262 12:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:50.262 12:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227547
00:30:50.262 12:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:50.833 12:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:50.833 12:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227547
00:30:50.833 12:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:51.405 12:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:51.405 12:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227547
00:30:51.405 12:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:51.666 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:51.666 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227547
00:30:51.666 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:51.927 Initializing NVMe Controllers
00:30:51.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:51.927 Controller IO queue size 128, less than required.
00:30:51.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:51.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:51.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:51.927 Initialization complete. Launching workers.
00:30:51.927 ========================================================
00:30:51.927                                                                                                          Latency(us)
00:30:51.927 Device Information                                                                   :       IOPS      MiB/s    Average        min        max
00:30:51.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002656.04 1000215.42 1007557.58
00:30:51.927 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1003992.86 1000323.14 1010501.12
00:30:51.927 ========================================================
00:30:51.927 Total                                                                             :     256.00       0.12 1003324.45 1000215.42 1010501.12
00:30:51.927
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1227547
00:30:52.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1227547) - No such process
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1227547
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:52.188 rmmod nvme_tcp
00:30:52.188 rmmod nvme_fabrics
00:30:52.188 rmmod nvme_keyring
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1226528 ']'
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1226528
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1226528 ']'
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1226528
00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@955 -- # uname 00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:52.188 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1226528 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1226528' 00:30:52.448 killing process with pid 1226528 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1226528 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1226528 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.448 12:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:54.995 00:30:54.995 real 0m18.195s 00:30:54.995 user 0m26.347s 00:30:54.995 sys 0m7.433s 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:54.995 ************************************ 00:30:54.995 END TEST nvmf_delete_subsystem 00:30:54.995 ************************************ 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:54.995 ************************************ 00:30:54.995 START TEST nvmf_host_management 00:30:54.995 ************************************ 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:54.995 * Looking for test storage... 00:30:54.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:54.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.995 --rc genhtml_branch_coverage=1 00:30:54.995 --rc genhtml_function_coverage=1 00:30:54.995 --rc genhtml_legend=1 00:30:54.995 --rc geninfo_all_blocks=1 00:30:54.995 --rc geninfo_unexecuted_blocks=1 00:30:54.995 00:30:54.995 ' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:54.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.995 --rc genhtml_branch_coverage=1 00:30:54.995 --rc genhtml_function_coverage=1 00:30:54.995 --rc genhtml_legend=1 00:30:54.995 --rc geninfo_all_blocks=1 00:30:54.995 --rc geninfo_unexecuted_blocks=1 00:30:54.995 00:30:54.995 ' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:54.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.995 --rc genhtml_branch_coverage=1 00:30:54.995 --rc genhtml_function_coverage=1 00:30:54.995 --rc genhtml_legend=1 00:30:54.995 --rc geninfo_all_blocks=1 00:30:54.995 --rc geninfo_unexecuted_blocks=1 00:30:54.995 00:30:54.995 ' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:54.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.995 --rc genhtml_branch_coverage=1 00:30:54.995 --rc genhtml_function_coverage=1 00:30:54.995 --rc genhtml_legend=1 
00:30:54.995 --rc geninfo_all_blocks=1 00:30:54.995 --rc geninfo_unexecuted_blocks=1 00:30:54.995 00:30:54.995 ' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.995 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.996 12:04:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.996 12:04:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:03.138 12:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:03.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:03.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.138 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
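The @409/@425 steps traced here are the whole discovery trick: for every bound PCI device the kernel publishes its netdev under /sys/bus/pci/devices/<addr>/net/, so a glob plus a prefix strip yields the interface name. A sketch under that assumption (the helper name is ours, not common.sh's):

# Resolve a PCI address such as 0000:4b:00.0 to its net interface name(s).
pci_to_netdev() {
    local pci=$1
    local devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
    [[ -e ${devs[0]} ]] || return 1                 # no netdev bound to this device
    printf '%s\n' "${devs[@]##*/}"                  # keep only the leaf name
}
pci_to_netdev 0000:4b:00.0   # prints cvl_0_0 on this rig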
00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:03.139 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:03.139 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:03.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:03.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:31:03.139 00:31:03.139 --- 10.0.0.2 ping statistics --- 00:31:03.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.139 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:03.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:03.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:31:03.139 00:31:03.139 --- 10.0.0.1 ping statistics --- 00:31:03.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.139 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1232253 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1232253 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1232253 ']' 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:03.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:03.139 12:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.139 [2024-10-11 12:04:46.876128] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:03.139 [2024-10-11 12:04:46.877258] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:03.139 [2024-10-11 12:04:46.877306] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.139 [2024-10-11 12:04:46.968871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:03.139 [2024-10-11 12:04:47.022476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.139 [2024-10-11 12:04:47.022533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.139 [2024-10-11 12:04:47.022542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.139 [2024-10-11 12:04:47.022549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.139 [2024-10-11 12:04:47.022555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.139 [2024-10-11 12:04:47.024994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:03.139 [2024-10-11 12:04:47.025155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:03.139 [2024-10-11 12:04:47.025315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:03.139 [2024-10-11 12:04:47.025315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.139 [2024-10-11 12:04:47.102556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:03.139 [2024-10-11 12:04:47.103414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:03.140 [2024-10-11 12:04:47.103711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:03.140 [2024-10-11 12:04:47.104159] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:03.140 [2024-10-11 12:04:47.104207] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
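nvmfappstart, as traced above, boils down to: launch nvmf_tgt inside the server namespace in interrupt mode, record its pid, then spin until the RPC socket answers. A hedged sketch of that sequence (paths relative to the SPDK tree; the retry-loop shape is ours):

# Start the target in the netns and block until its RPC socket is live.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init &>/dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done

The thread.c notices above confirm what --interrupt-mode asked for: the app thread and each nvmf_tgt_poll_group thread run event-driven instead of busy-polling.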
00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.140 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.140 [2024-10-11 12:04:47.758304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.401 Malloc0 00:31:03.401 [2024-10-11 12:04:47.858624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.401 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1232603 00:31:03.401 12:04:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1232603 /var/tmp/bdevperf.sock 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1232603 ']' 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:03.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:03.402 { 00:31:03.402 "params": { 00:31:03.402 "name": "Nvme$subsystem", 00:31:03.402 "trtype": "$TEST_TRANSPORT", 00:31:03.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.402 "adrfam": "ipv4", 00:31:03.402 "trsvcid": "$NVMF_PORT", 00:31:03.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.402 "hdgst": ${hdgst:-false}, 00:31:03.402 "ddgst": ${ddgst:-false} 00:31:03.402 }, 00:31:03.402 "method": "bdev_nvme_attach_controller" 00:31:03.402 } 00:31:03.402 EOF 00:31:03.402 )") 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
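Two details in the trace above are worth unpacking: the heredoc builds one bdev_nvme_attach_controller stanza per subsystem, with ${hdgst:-false}/${ddgst:-false} supplying defaults, and bdevperf sees --json /dev/fd/63 because the harness hands the generated config over via process substitution. A trimmed sketch of the same pattern (the exact JSON wrapper here is an assumption, not copied from common.sh):

# Hypothetical reduction: build the attach stanza for subsystem $1 and feed
# it to bdevperf without a temp file; <(...) surfaces as /dev/fd/63.
gen_json() {
    cat <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme$1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$1",
              "hostnqn": "nqn.2016-06.io.spdk:host$1",
              "hdgst": ${hdgst:-false}, "ddgst": ${ddgst:-false} } } ] } ] }
EOF
}
./build/examples/bdevperf --json <(gen_json 0) -q 64 -o 65536 -w verify -t 10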
00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:31:03.402 12:04:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:03.402 "params": { 00:31:03.402 "name": "Nvme0", 00:31:03.402 "trtype": "tcp", 00:31:03.402 "traddr": "10.0.0.2", 00:31:03.402 "adrfam": "ipv4", 00:31:03.402 "trsvcid": "4420", 00:31:03.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.402 "hdgst": false, 00:31:03.402 "ddgst": false 00:31:03.402 }, 00:31:03.402 "method": "bdev_nvme_attach_controller" 00:31:03.402 }' 00:31:03.402 [2024-10-11 12:04:47.969122] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:03.402 [2024-10-11 12:04:47.969195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232603 ] 00:31:03.663 [2024-10-11 12:04:48.054265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.663 [2024-10-11 12:04:48.107427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.924 Running I/O for 10 seconds... 00:31:04.185 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:04.185 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:31:04.185 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:04.185 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.185 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:04.449 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:04.449 [2024-10-11 12:04:48.883231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.449 [2024-10-11 12:04:48.883369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 
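The error burst around here is the point of the test: nvmf_subsystem_remove_host revokes the host's access while bdevperf still has reads queued, so the target tears down the TCP qpair (the tcp.c:1773 recv-state notices) and fails every in-flight command with ABORTED - SQ DELETION. Issued standalone, the triggering RPC looks like this (socket default, NQNs as in the log):

# Revoke host access mid-I/O; queued commands on that host's qpairs
# then complete with ABORTED - SQ DELETION status.
./scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0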
00:31:04.449 [ the tcp.c:1773 nvmf_tcp_qpair_set_recv_state notice for tqpair=0x259a7a0 repeats verbatim roughly forty more times here, timestamps 12:04:48.883377 through 12:04:48.883707; duplicate log lines condensed ] 00:31:04.450 [2024-10-11 12:04:48.883714] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.450 [2024-10-11 12:04:48.883721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.450 [2024-10-11 12:04:48.883728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.450 [2024-10-11 12:04:48.883735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.450 [2024-10-11 12:04:48.883742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.450 [2024-10-11 12:04:48.883749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.450 [2024-10-11 12:04:48.883756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.450 [2024-10-11 12:04:48.883763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259a7a0 is same with the state(6) to be set 00:31:04.450 [2024-10-11 12:04:48.883886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.883945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.883972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.883984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:04.450 [2024-10-11 12:04:48.884265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 
[2024-10-11 12:04:48.884440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 12:04:48.884598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.450 [2024-10-11 
12:04:48.884616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.450 [2024-10-11 12:04:48.884626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 
12:04:48.884818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.884972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.884982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 
12:04:48.884992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.885002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.885010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.885019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.885026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.885036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.885043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.885053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.885061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.885070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.885077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.885088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.885096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.885106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:04.451 [2024-10-11 12:04:48.885113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.885123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5f7b0 is same with the state(6) to be set 00:31:04.451 [2024-10-11 12:04:48.885192] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc5f7b0 was disconnected and freed. reset controller. 
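Decoding the flood collapsed above: in SPDK completion prints, the pair "(00/08)" is status code type 0x00 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion), so every READ still in flight on qid:1 was force-completed when the queue pair dropped; the repeated tcp.c:1773 line is the target-side (nvmf_tcp) qpair re-asserting the same recv state for tqpair 0x259a7a0 while that happened. A triage sketch over a saved copy of this console output (build.log is a hypothetical filename):

    grep -c 'ABORTED - SQ DELETION' build.log                               # count force-aborted completions
    grep -o 'READ sqid:1 cid:[0-9]*' build.log | sort -t: -k3 -n | tail -n1 # highest aborted cid seen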
00:31:04.451 [2024-10-11 12:04:48.885258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:04.451 [2024-10-11 12:04:48.885269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:04.451 [2024-10-11 12:04:48.885278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:04.451 [2024-10-11 12:04:48.885287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:04.451 [2024-10-11 12:04:48.885295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:04.451 [2024-10-11 12:04:48.885303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:04.451 [2024-10-11 12:04:48.885312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:04.451 [2024-10-11 12:04:48.885320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:04.451 [2024-10-11 12:04:48.885327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa46500 is same with the state(6) to be set
00:31:04.451 [2024-10-11 12:04:48.886575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:31:04.451 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.451 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:31:04.451 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.451 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:04.451 task offset: 98304 on job bdev=Nvme0n1 fails
00:31:04.451
00:31:04.451                                                          Latency(us)
00:31:04.451 [2024-10-11T10:04:49.083Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:31:04.451 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:04.451 Job: Nvme0n1 ended in about 0.57 seconds with error
00:31:04.451 Verification LBA range: start 0x0 length 0x400
00:31:04.451 Nvme0n1 : 0.57 1343.64 83.98 111.97 0.00 42910.21 4587.52 38010.88
00:31:04.451 [2024-10-11T10:04:49.083Z] ===================================================================================================================
00:31:04.451 [2024-10-11T10:04:49.083Z] Total : 1343.64 83.98 111.97 0.00 42910.21 4587.52 38010.88
00:31:04.451 [2024-10-11 12:04:48.888810] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:31:04.451 [2024-10-11 12:04:48.888849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa46500 (9): Bad file descriptor
00:31:04.451 [2024-10-11 12:04:48.890515] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:31:04.451 [2024-10-11 12:04:48.890605]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:04.451 [2024-10-11 12:04:48.890633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:04.451 [2024-10-11 12:04:48.890649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:04.451 [2024-10-11 12:04:48.890658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:04.451 [2024-10-11 12:04:48.890677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:04.451 [2024-10-11 12:04:48.890685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa46500 00:31:04.451 [2024-10-11 12:04:48.890708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa46500 (9): Bad file descriptor 00:31:04.452 [2024-10-11 12:04:48.890721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:04.452 [2024-10-11 12:04:48.890730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:04.452 [2024-10-11 12:04:48.890740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:04.452 [2024-10-11 12:04:48.890756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:04.452 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:04.452 12:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1232603 00:31:05.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1232603) - No such process 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:05.395 { 00:31:05.395 "params": { 
00:31:05.395 "name": "Nvme$subsystem", 00:31:05.395 "trtype": "$TEST_TRANSPORT", 00:31:05.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.395 "adrfam": "ipv4", 00:31:05.395 "trsvcid": "$NVMF_PORT", 00:31:05.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.395 "hdgst": ${hdgst:-false}, 00:31:05.395 "ddgst": ${ddgst:-false} 00:31:05.395 }, 00:31:05.395 "method": "bdev_nvme_attach_controller" 00:31:05.395 } 00:31:05.395 EOF 00:31:05.395 )") 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:31:05.395 12:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:05.395 "params": { 00:31:05.395 "name": "Nvme0", 00:31:05.395 "trtype": "tcp", 00:31:05.395 "traddr": "10.0.0.2", 00:31:05.395 "adrfam": "ipv4", 00:31:05.395 "trsvcid": "4420", 00:31:05.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.395 "hdgst": false, 00:31:05.395 "ddgst": false 00:31:05.395 }, 00:31:05.395 "method": "bdev_nvme_attach_controller" 00:31:05.395 }' 00:31:05.395 [2024-10-11 12:04:49.963165] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:05.395 [2024-10-11 12:04:49.963238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232957 ] 00:31:05.655 [2024-10-11 12:04:50.044741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.655 [2024-10-11 12:04:50.082662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.655 Running I/O for 1 seconds... 
00:31:07.046 1747.00 IOPS, 109.19 MiB/s
00:31:07.046                                                          Latency(us)
00:31:07.046 [2024-10-11T10:04:51.678Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:31:07.046 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:31:07.046 Verification LBA range: start 0x0 length 0x400
00:31:07.046 Nvme0n1 : 1.01 1784.24 111.51 0.00 0.00 35098.40 3822.93 36918.61
00:31:07.046 [2024-10-11T10:04:51.678Z] ===================================================================================================================
00:31:07.046 [2024-10-11T10:04:51.678Z] Total : 1784.24 111.51 0.00 0.00 35098.40 3822.93 36918.61
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:07.046 rmmod nvme_tcp
00:31:07.046 rmmod nvme_fabrics
00:31:07.046 rmmod nvme_keyring
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1232253 ']'
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1232253
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1232253 ']'
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1232253
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname
00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:07.046 12:04:51
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1232253 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1232253' 00:31:07.046 killing process with pid 1232253 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1232253 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1232253 00:31:07.046 [2024-10-11 12:04:51.642588] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:07.046 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:31:07.307 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.307 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.307 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.307 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.307 12:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.221 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.221 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:09.221 00:31:09.221 real 0m14.625s 00:31:09.221 user 0m19.200s 00:31:09.221 sys 0m7.444s 00:31:09.221 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:09.221 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:09.221 ************************************ 00:31:09.221 END TEST nvmf_host_management 00:31:09.221 ************************************ 00:31:09.222 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:09.222 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:09.222 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:09.222 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:09.222 ************************************ 00:31:09.222 START TEST nvmf_lvol 00:31:09.222 ************************************ 00:31:09.222 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:09.483 * Looking for test storage... 00:31:09.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.483 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:09.483 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:09.483 12:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:09.483 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:09.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.484 --rc genhtml_branch_coverage=1 00:31:09.484 --rc genhtml_function_coverage=1 00:31:09.484 --rc genhtml_legend=1 00:31:09.484 --rc geninfo_all_blocks=1 00:31:09.484 --rc geninfo_unexecuted_blocks=1 00:31:09.484 00:31:09.484 ' 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:09.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.484 --rc genhtml_branch_coverage=1 00:31:09.484 --rc genhtml_function_coverage=1 00:31:09.484 --rc genhtml_legend=1 00:31:09.484 --rc geninfo_all_blocks=1 00:31:09.484 --rc geninfo_unexecuted_blocks=1 00:31:09.484 00:31:09.484 ' 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:09.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.484 --rc genhtml_branch_coverage=1 00:31:09.484 --rc genhtml_function_coverage=1 00:31:09.484 --rc genhtml_legend=1 00:31:09.484 --rc geninfo_all_blocks=1 00:31:09.484 --rc geninfo_unexecuted_blocks=1 00:31:09.484 00:31:09.484 ' 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:09.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.484 --rc genhtml_branch_coverage=1 00:31:09.484 --rc genhtml_function_coverage=1 00:31:09.484 --rc genhtml_legend=1 00:31:09.484 --rc geninfo_all_blocks=1 00:31:09.484 --rc geninfo_unexecuted_blocks=1 00:31:09.484 00:31:09.484 ' 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.484 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.485 12:04:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.485 12:04:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.627 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.628 12:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:17.628 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:17.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:17.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:17.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:17.628 
12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.628 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:17.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:31:17.629 00:31:17.629 --- 10.0.0.2 ping statistics --- 00:31:17.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.629 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:17.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:31:17.629 00:31:17.629 --- 10.0.0.1 ping statistics --- 00:31:17.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.629 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1237386 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1237386 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1237386 ']' 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:17.629 12:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:17.629 [2024-10-11 12:05:01.649444] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
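
The nvmf_tcp_init sequence traced above gives each test a two-port topology on a single host: one port of the E810 pair is moved into a private network namespace and addressed as the target (10.0.0.2), while the other stays in the root namespace as the initiator (10.0.0.1), with an iptables ACCEPT rule for port 4420 and a ping in each direction to prove the link. A minimal standalone sketch of the same steps, assuming root, iproute2 and iptables; the cvl_0_* names and the 10.0.0.0/24 addresses are what this rig happened to get and will differ elsewhere:

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0            # port handed to the SPDK target
  INI_IF=cvl_0_1            # port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk

  # start from a clean slate
  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"

  # create the namespace and move the target-side port into it
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  # address both ends of the link
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

  # bring everything up
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # let NVMe/TCP traffic through the host firewall
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # verify reachability in both directions
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
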
00:31:17.629 [2024-10-11 12:05:01.650584] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:17.629 [2024-10-11 12:05:01.650635] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.629 [2024-10-11 12:05:01.740302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:17.629 [2024-10-11 12:05:01.794228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.629 [2024-10-11 12:05:01.794280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.629 [2024-10-11 12:05:01.794288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.629 [2024-10-11 12:05:01.794295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.629 [2024-10-11 12:05:01.794302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.629 [2024-10-11 12:05:01.796391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.629 [2024-10-11 12:05:01.796548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.629 [2024-10-11 12:05:01.796549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.629 [2024-10-11 12:05:01.872659] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.629 [2024-10-11 12:05:01.873692] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:17.629 [2024-10-11 12:05:01.874114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:17.629 [2024-10-11 12:05:01.874240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
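
With the namespace up, nvmfappstart launches nvmf_tgt inside it (ip netns exec cvl_0_0_ns_spdk ... -i 0 -e 0xFFFF --interrupt-mode -m 0x7) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock; the EAL and reactor notices above are that startup completing, with all three reactors dropping into interrupt mode instead of busy-polling. A sketch of the launch-and-wait pattern; the retry loop is an illustration rather than the common.sh implementation, and it probes the socket with spdk_get_version, though any cheap RPC would serve:

  #!/usr/bin/env bash
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk
  RPC_SOCK=/var/tmp/spdk.sock

  # -m 0x7: run on cores 0-2; -i 0: shared-memory id; -e 0xFFFF: tracepoint mask
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  nvmfpid=$!

  # poll the RPC socket until the app is up (or bail out if it died)
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.5
  done
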
00:31:17.890 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:17.890 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:31:17.890 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:17.890 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:17.890 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:17.890 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.890 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:18.150 [2024-10-11 12:05:02.661430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.150 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:18.412 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:18.412 12:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:18.673 12:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:18.673 12:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:18.673 12:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:18.934 12:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cbd20289-3199-4fea-b4bc-ccd5e9a929f4 00:31:18.934 12:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cbd20289-3199-4fea-b4bc-ccd5e9a929f4 lvol 20 00:31:19.195 12:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=13a4180a-0683-4616-8849-475cd0af9227 00:31:19.195 12:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:19.455 12:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13a4180a-0683-4616-8849-475cd0af9227 00:31:19.455 12:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:19.716 [2024-10-11 12:05:04.217430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:19.716 12:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:19.977 12:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1237994 00:31:19.977 12:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:19.977 12:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:20.920 12:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 13a4180a-0683-4616-8849-475cd0af9227 MY_SNAPSHOT 00:31:21.181 12:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ea28859e-6e9c-414c-93eb-947fb3e22805 00:31:21.181 12:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 13a4180a-0683-4616-8849-475cd0af9227 30 00:31:21.444 12:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ea28859e-6e9c-414c-93eb-947fb3e22805 MY_CLONE 00:31:21.707 12:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=95e240a4-47ba-46b4-ad75-62f6e2ffd578 00:31:21.707 12:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 95e240a4-47ba-46b4-ad75-62f6e2ffd578 00:31:22.278 12:05:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1237994 00:31:30.415 Initializing NVMe Controllers 00:31:30.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:30.415 Controller IO queue size 128, less than required. 00:31:30.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:30.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:30.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:30.415 Initialization complete. Launching workers. 
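
The RPC sequence traced above is the core of the lvol test: two 64 MiB malloc bdevs striped into raid0, an lvstore on top, a 20 MiB lvol exported over NVMe/TCP, and then, while spdk_nvme_perf drives random writes at the namespace, a snapshot, a resize of the origin to 30 MiB, a clone of the snapshot, and an inflate of the clone. Collected into one standalone sketch (rpc.py path and sizes from the trace; the UUIDs are captured from the RPC output rather than hard-coded, since they differ per run); the perf summary that follows below reports the per-core IOPS and latency measured while all of this happened:

  #!/usr/bin/env bash
  set -e
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                       # -> Malloc0
  $rpc bdev_malloc_create 64 512                       # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol, prints its UUID

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # with perf running against the namespace:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                     # grow the origin
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                      # decouple the clone from its snapshot
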
00:31:30.415 ======================================================== 00:31:30.415 Latency(us) 00:31:30.415 Device Information : IOPS MiB/s Average min max 00:31:30.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14990.50 58.56 8538.73 1900.04 71801.91 00:31:30.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16577.30 64.76 7722.80 589.73 62517.59 00:31:30.415 ======================================================== 00:31:30.415 Total : 31567.79 123.31 8110.26 589.73 71801.91 00:31:30.415 00:31:30.415 12:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.415 12:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13a4180a-0683-4616-8849-475cd0af9227 00:31:30.675 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cbd20289-3199-4fea-b4bc-ccd5e9a929f4 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.935 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.935 rmmod nvme_tcp 00:31:30.935 rmmod nvme_fabrics 00:31:30.935 rmmod nvme_keyring 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1237386 ']' 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1237386 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1237386 ']' 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1237386 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1237386 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1237386' 00:31:30.936 killing process with pid 1237386 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1237386 00:31:30.936 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1237386 00:31:31.196 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:31.196 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:31.196 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.197 12:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.108 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.108 00:31:33.108 real 0m23.861s 00:31:33.108 user 0m55.874s 00:31:33.108 sys 0m10.856s 00:31:33.108 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:33.108 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:33.108 ************************************ 00:31:33.108 END TEST nvmf_lvol 00:31:33.108 ************************************ 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.369 ************************************ 00:31:33.369 START TEST nvmf_lvs_grow 00:31:33.369 
************************************ 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:33.369 * Looking for test storage... 00:31:33.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.369 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:33.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.370 --rc genhtml_branch_coverage=1 00:31:33.370 --rc genhtml_function_coverage=1 00:31:33.370 --rc genhtml_legend=1 00:31:33.370 --rc geninfo_all_blocks=1 00:31:33.370 --rc geninfo_unexecuted_blocks=1 00:31:33.370 00:31:33.370 ' 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:33.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.370 --rc genhtml_branch_coverage=1 00:31:33.370 --rc genhtml_function_coverage=1 00:31:33.370 --rc genhtml_legend=1 00:31:33.370 --rc geninfo_all_blocks=1 00:31:33.370 --rc geninfo_unexecuted_blocks=1 00:31:33.370 00:31:33.370 ' 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:33.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.370 --rc genhtml_branch_coverage=1 00:31:33.370 --rc genhtml_function_coverage=1 00:31:33.370 --rc genhtml_legend=1 00:31:33.370 --rc geninfo_all_blocks=1 00:31:33.370 --rc geninfo_unexecuted_blocks=1 00:31:33.370 00:31:33.370 ' 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:33.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.370 --rc genhtml_branch_coverage=1 00:31:33.370 --rc genhtml_function_coverage=1 00:31:33.370 --rc genhtml_legend=1 00:31:33.370 --rc geninfo_all_blocks=1 00:31:33.370 --rc geninfo_unexecuted_blocks=1 00:31:33.370 00:31:33.370 ' 00:31:33.370 12:05:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.370 12:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
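
The enormous PATH values above are not log corruption: /etc/opt/spdk-pkgdep/paths/export.sh prepends its three toolchain directories unconditionally, and every test script sources it again, so the go/protoc/golangci triplet accumulates once per test. An idempotent guard keeps the value flat; a sketch of the usual pattern, offered as a suggested fix rather than what export.sh actually does:

  # prepend a directory to PATH only if it is not already present
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;                 # already on PATH: do nothing
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/golangci/1.54.2/bin
  export PATH
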
00:31:33.631 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.632 12:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:41.777 12:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
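
gather_supported_nvmf_pci_devs, traced above, whitelists NICs by PCI vendor:device pair (0x8086 Intel E810/X722 parts, 0x15b3 Mellanox parts) and the loop resuming below then maps each matched function to its kernel netdev through /sys/bus/pci/devices/<addr>/net/. The same scan can be done directly against sysfs; a sketch assuming only a mounted /sys, with the ID list abridged to the two E810 devices this rig actually matches:

  #!/usr/bin/env bash
  want="0x8086:0x1592 0x8086:0x159b"     # E810 vendor:device pairs, per the trace

  for dev in /sys/bus/pci/devices/*; do
      id="$(cat "$dev/vendor"):$(cat "$dev/device")"
      case " $want " in
          *" $id "*)
              pci=${dev##*/}
              for net in "$dev"/net/*; do          # bound netdevs appear under net/
                  [ -e "$net" ] && echo "Found $pci ($id): ${net##*/}"
              done
              ;;
      esac
  done
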
00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:41.777 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.777 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:41.778 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:41.778 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:41.778 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:41.778 12:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:41.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:41.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:31:41.778 00:31:41.778 --- 10.0.0.2 ping statistics --- 00:31:41.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.778 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:41.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:31:41.778 00:31:41.778 --- 10.0.0.1 ping statistics --- 00:31:41.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.778 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1244235 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1244235 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1244235 ']' 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:41.778 12:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:41.778 [2024-10-11 12:05:25.557164] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
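The steps above turn one physical host into a self-contained NVMe/TCP testbed: each selected PCI function is resolved to its kernel netdev through sysfs, port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, connectivity is ping-checked both ways, and nvmf_tgt is launched inside the namespace in interrupt mode. A condensed, hedged replay of those commands (run as root; interface names are the ones this run discovered, repo paths shortened):

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        net_devs+=("${pci_net_devs[@]##*/}")               # keep the interface name only
    done
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
            -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &     # single-core target app
    nvmfpid=$!

Because the two E810 ports are cabled back-to-back, this exercises real NIC hardware for both roles without needing a second machine.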
00:31:41.778 [2024-10-11 12:05:25.558302] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:41.778 [2024-10-11 12:05:25.558352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.778 [2024-10-11 12:05:25.648488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.778 [2024-10-11 12:05:25.699847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:41.778 [2024-10-11 12:05:25.699902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:41.778 [2024-10-11 12:05:25.699910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:41.778 [2024-10-11 12:05:25.699917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:41.778 [2024-10-11 12:05:25.699924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:41.778 [2024-10-11 12:05:25.700661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.778 [2024-10-11 12:05:25.776429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:41.778 [2024-10-11 12:05:25.776730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:41.778 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:41.778 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:31:41.778 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:41.778 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:41.779 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:42.040 [2024-10-11 12:05:26.593519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:42.040 ************************************ 00:31:42.040 START TEST lvs_grow_clean 00:31:42.040 ************************************ 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:42.040 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:42.301 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:42.301 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:42.301 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:42.301 12:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:42.563 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:42.563 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:42.563 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:42.825 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:42.825 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:42.825 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 192b5de5-8b25-452c-84b0-1324c4e1973f lvol 150 00:31:43.086 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f9b4e742-b847-4ef7-a040-e02f0b8ebbb7 00:31:43.086 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:43.086 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:43.086 [2024-10-11 12:05:27.653224] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:43.086 [2024-10-11 12:05:27.653395] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:43.086 true 00:31:43.086 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:43.086 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:43.347 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:43.347 12:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:43.608 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9b4e742-b847-4ef7-a040-e02f0b8ebbb7 00:31:43.609 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:43.869 [2024-10-11 12:05:28.345894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.869 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1244712 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1244712 /var/tmp/bdevperf.sock 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1244712 ']' 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:44.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:44.130 12:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:44.130 [2024-10-11 12:05:28.593642] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:44.130 [2024-10-11 12:05:28.593728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244712 ] 00:31:44.130 [2024-10-11 12:05:28.676465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.130 [2024-10-11 12:05:28.728712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.075 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:45.075 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:31:45.075 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:45.075 Nvme0n1 00:31:45.075 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:45.336 [ 00:31:45.336 { 00:31:45.336 "name": "Nvme0n1", 00:31:45.336 "aliases": [ 00:31:45.336 "f9b4e742-b847-4ef7-a040-e02f0b8ebbb7" 00:31:45.336 ], 00:31:45.336 "product_name": "NVMe disk", 00:31:45.336 "block_size": 4096, 00:31:45.336 "num_blocks": 38912, 00:31:45.336 "uuid": "f9b4e742-b847-4ef7-a040-e02f0b8ebbb7", 00:31:45.336 "numa_id": 0, 00:31:45.336 "assigned_rate_limits": { 00:31:45.336 "rw_ios_per_sec": 0, 00:31:45.336 "rw_mbytes_per_sec": 0, 00:31:45.336 "r_mbytes_per_sec": 0, 00:31:45.336 "w_mbytes_per_sec": 0 00:31:45.336 }, 00:31:45.336 "claimed": false, 00:31:45.336 "zoned": false, 00:31:45.336 "supported_io_types": { 00:31:45.336 "read": true, 00:31:45.336 "write": true, 00:31:45.336 "unmap": true, 00:31:45.336 "flush": true, 00:31:45.336 "reset": true, 00:31:45.336 "nvme_admin": true, 00:31:45.336 "nvme_io": true, 00:31:45.336 "nvme_io_md": false, 00:31:45.336 "write_zeroes": true, 00:31:45.336 "zcopy": false, 00:31:45.336 "get_zone_info": false, 00:31:45.336 "zone_management": false, 00:31:45.336 "zone_append": false, 00:31:45.336 "compare": true, 00:31:45.336 "compare_and_write": true, 00:31:45.336 "abort": true, 00:31:45.336 "seek_hole": false, 00:31:45.336 "seek_data": false, 00:31:45.336 "copy": true, 
00:31:45.337 "nvme_iov_md": false 00:31:45.337 }, 00:31:45.337 "memory_domains": [ 00:31:45.337 { 00:31:45.337 "dma_device_id": "system", 00:31:45.337 "dma_device_type": 1 00:31:45.337 } 00:31:45.337 ], 00:31:45.337 "driver_specific": { 00:31:45.337 "nvme": [ 00:31:45.337 { 00:31:45.337 "trid": { 00:31:45.337 "trtype": "TCP", 00:31:45.337 "adrfam": "IPv4", 00:31:45.337 "traddr": "10.0.0.2", 00:31:45.337 "trsvcid": "4420", 00:31:45.337 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:45.337 }, 00:31:45.337 "ctrlr_data": { 00:31:45.337 "cntlid": 1, 00:31:45.337 "vendor_id": "0x8086", 00:31:45.337 "model_number": "SPDK bdev Controller", 00:31:45.337 "serial_number": "SPDK0", 00:31:45.337 "firmware_revision": "25.01", 00:31:45.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.337 "oacs": { 00:31:45.337 "security": 0, 00:31:45.337 "format": 0, 00:31:45.337 "firmware": 0, 00:31:45.337 "ns_manage": 0 00:31:45.337 }, 00:31:45.337 "multi_ctrlr": true, 00:31:45.337 "ana_reporting": false 00:31:45.337 }, 00:31:45.337 "vs": { 00:31:45.337 "nvme_version": "1.3" 00:31:45.337 }, 00:31:45.337 "ns_data": { 00:31:45.337 "id": 1, 00:31:45.337 "can_share": true 00:31:45.337 } 00:31:45.337 } 00:31:45.337 ], 00:31:45.337 "mp_policy": "active_passive" 00:31:45.337 } 00:31:45.337 } 00:31:45.337 ] 00:31:45.337 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1245042 00:31:45.337 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:45.337 12:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:45.337 Running I/O for 10 seconds... 
00:31:46.725 Latency(us) 00:31:46.725 [2024-10-11T10:05:31.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.725 Nvme0n1 : 1.00 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:31:46.725 [2024-10-11T10:05:31.357Z] =================================================================================================================== 00:31:46.725 [2024-10-11T10:05:31.357Z] Total : 16510.00 64.49 0.00 0.00 0.00 0.00 0.00 00:31:46.725 00:31:47.297 12:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:47.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:47.558 Nvme0n1 : 2.00 16830.50 65.74 0.00 0.00 0.00 0.00 0.00 00:31:47.558 [2024-10-11T10:05:32.190Z] =================================================================================================================== 00:31:47.558 [2024-10-11T10:05:32.190Z] Total : 16830.50 65.74 0.00 0.00 0.00 0.00 0.00 00:31:47.558 00:31:47.558 true 00:31:47.559 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:47.559 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:47.820 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:47.820 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:47.820 12:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1245042 00:31:48.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.392 Nvme0n1 : 3.00 17023.00 66.50 0.00 0.00 0.00 0.00 0.00 00:31:48.392 [2024-10-11T10:05:33.024Z] =================================================================================================================== 00:31:48.392 [2024-10-11T10:05:33.024Z] Total : 17023.00 66.50 0.00 0.00 0.00 0.00 0.00 00:31:48.392 00:31:49.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.333 Nvme0n1 : 4.00 17263.25 67.43 0.00 0.00 0.00 0.00 0.00 00:31:49.333 [2024-10-11T10:05:33.965Z] =================================================================================================================== 00:31:49.333 [2024-10-11T10:05:33.965Z] Total : 17263.25 67.43 0.00 0.00 0.00 0.00 0.00 00:31:49.333 00:31:50.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.719 Nvme0n1 : 5.00 18703.60 73.06 0.00 0.00 0.00 0.00 0.00 00:31:50.719 [2024-10-11T10:05:35.351Z] =================================================================================================================== 00:31:50.719 [2024-10-11T10:05:35.351Z] Total : 18703.60 73.06 0.00 0.00 0.00 0.00 0.00 00:31:50.719 00:31:51.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.728 Nvme0n1 : 6.00 19818.17 77.41 0.00 0.00 0.00 0.00 0.00 00:31:51.728 [2024-10-11T10:05:36.360Z] 
=================================================================================================================== 00:31:51.728 [2024-10-11T10:05:36.360Z] Total : 19818.17 77.41 0.00 0.00 0.00 0.00 0.00 00:31:51.728 00:31:52.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.667 Nvme0n1 : 7.00 20616.57 80.53 0.00 0.00 0.00 0.00 0.00 00:31:52.667 [2024-10-11T10:05:37.299Z] =================================================================================================================== 00:31:52.667 [2024-10-11T10:05:37.299Z] Total : 20616.57 80.53 0.00 0.00 0.00 0.00 0.00 00:31:52.667 00:31:53.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.608 Nvme0n1 : 8.00 21215.50 82.87 0.00 0.00 0.00 0.00 0.00 00:31:53.608 [2024-10-11T10:05:38.240Z] =================================================================================================================== 00:31:53.608 [2024-10-11T10:05:38.240Z] Total : 21215.50 82.87 0.00 0.00 0.00 0.00 0.00 00:31:53.608 00:31:54.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.550 Nvme0n1 : 9.00 21681.33 84.69 0.00 0.00 0.00 0.00 0.00 00:31:54.550 [2024-10-11T10:05:39.182Z] =================================================================================================================== 00:31:54.550 [2024-10-11T10:05:39.182Z] Total : 21681.33 84.69 0.00 0.00 0.00 0.00 0.00 00:31:54.550 00:31:55.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.496 Nvme0n1 : 10.00 22060.30 86.17 0.00 0.00 0.00 0.00 0.00 00:31:55.496 [2024-10-11T10:05:40.128Z] =================================================================================================================== 00:31:55.496 [2024-10-11T10:05:40.128Z] Total : 22060.30 86.17 0.00 0.00 0.00 0.00 0.00 00:31:55.496 00:31:55.496 00:31:55.496 Latency(us) 00:31:55.496 [2024-10-11T10:05:40.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.496 Nvme0n1 : 10.00 22059.85 86.17 0.00 0.00 5798.93 3686.40 28835.84 00:31:55.496 [2024-10-11T10:05:40.128Z] =================================================================================================================== 00:31:55.496 [2024-10-11T10:05:40.128Z] Total : 22059.85 86.17 0.00 0.00 5798.93 3686.40 28835.84 00:31:55.496 { 00:31:55.496 "results": [ 00:31:55.496 { 00:31:55.496 "job": "Nvme0n1", 00:31:55.496 "core_mask": "0x2", 00:31:55.496 "workload": "randwrite", 00:31:55.496 "status": "finished", 00:31:55.496 "queue_depth": 128, 00:31:55.496 "io_size": 4096, 00:31:55.496 "runtime": 10.003149, 00:31:55.496 "iops": 22059.853352179398, 00:31:55.496 "mibps": 86.17130215695077, 00:31:55.496 "io_failed": 0, 00:31:55.496 "io_timeout": 0, 00:31:55.496 "avg_latency_us": 5798.929120186585, 00:31:55.496 "min_latency_us": 3686.4, 00:31:55.496 "max_latency_us": 28835.84 00:31:55.496 } 00:31:55.496 ], 00:31:55.496 "core_count": 1 00:31:55.496 } 00:31:55.496 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1244712 00:31:55.496 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1244712 ']' 00:31:55.496 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1244712 00:31:55.496 12:05:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:31:55.496 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:55.496 12:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1244712 00:31:55.496 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:55.496 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:55.496 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1244712' 00:31:55.496 killing process with pid 1244712 00:31:55.496 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1244712 00:31:55.496 Received shutdown signal, test time was about 10.000000 seconds 00:31:55.496 00:31:55.496 Latency(us) 00:31:55.496 [2024-10-11T10:05:40.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.496 [2024-10-11T10:05:40.128Z] =================================================================================================================== 00:31:55.496 [2024-10-11T10:05:40.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:55.496 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1244712 00:31:55.757 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:55.757 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:56.018 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:56.018 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:56.279 [2024-10-11 12:05:40.833289] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:56.279 12:05:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:56.279 12:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:56.540 request: 00:31:56.540 { 00:31:56.540 "uuid": "192b5de5-8b25-452c-84b0-1324c4e1973f", 00:31:56.540 "method": "bdev_lvol_get_lvstores", 00:31:56.540 "req_id": 1 00:31:56.540 } 00:31:56.540 Got JSON-RPC error response 00:31:56.540 response: 00:31:56.540 { 00:31:56.540 "code": -19, 00:31:56.540 "message": "No such device" 00:31:56.540 } 00:31:56.540 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:31:56.540 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:56.540 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:56.541 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:56.541 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:56.802 aio_bdev 00:31:56.802 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f9b4e742-b847-4ef7-a040-e02f0b8ebbb7 
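The sequence above is the growth check and teardown. While I/O was still running, bdev_lvol_grow_lvstore expanded the lvstore onto the aio file that had earlier been truncated from 200 MiB to 400 MiB and rescanned, taking total_data_clusters from 49 to 99; after bdevperf is killed and the subsystem removed, free_clusters reads 61 (99 minus the 38 clusters the 150 MiB lvol occupies at 4 MiB per cluster), and deleting the aio base bdev must close the lvstore, so the next bdev_lvol_get_lvstores is required to fail with -19 "No such device" (the NOT wrapper inverts the exit status). A hedged sketch, with $lvs as the lvstore UUID and $aio_file as the shortened backing path:

    truncate -s 400M "$aio_file"                               # grow the backing file
    ./scripts/rpc.py bdev_aio_rescan aio_bdev                  # 51200 -> 102400 blocks
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"          # 49 -> 99 data clusters
    ./scripts/rpc.py bdev_aio_delete aio_bdev                  # lvstore closes with it
    if ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore should be gone" >&2; exit 1              # expect -19 No such device
    fi
    ./scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096 # re-attach to inspect lvol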
00:31:56.802 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=f9b4e742-b847-4ef7-a040-e02f0b8ebbb7 00:31:56.802 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:56.802 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:31:56.802 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:56.802 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:56.802 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:56.802 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f9b4e742-b847-4ef7-a040-e02f0b8ebbb7 -t 2000 00:31:57.062 [ 00:31:57.062 { 00:31:57.062 "name": "f9b4e742-b847-4ef7-a040-e02f0b8ebbb7", 00:31:57.062 "aliases": [ 00:31:57.062 "lvs/lvol" 00:31:57.062 ], 00:31:57.062 "product_name": "Logical Volume", 00:31:57.062 "block_size": 4096, 00:31:57.062 "num_blocks": 38912, 00:31:57.062 "uuid": "f9b4e742-b847-4ef7-a040-e02f0b8ebbb7", 00:31:57.062 "assigned_rate_limits": { 00:31:57.062 "rw_ios_per_sec": 0, 00:31:57.062 "rw_mbytes_per_sec": 0, 00:31:57.062 "r_mbytes_per_sec": 0, 00:31:57.062 "w_mbytes_per_sec": 0 00:31:57.062 }, 00:31:57.062 "claimed": false, 00:31:57.062 "zoned": false, 00:31:57.062 "supported_io_types": { 00:31:57.062 "read": true, 00:31:57.062 "write": true, 00:31:57.062 "unmap": true, 00:31:57.062 "flush": false, 00:31:57.062 "reset": true, 00:31:57.062 "nvme_admin": false, 00:31:57.062 "nvme_io": false, 00:31:57.062 "nvme_io_md": false, 00:31:57.062 "write_zeroes": true, 00:31:57.062 "zcopy": false, 00:31:57.062 "get_zone_info": false, 00:31:57.062 "zone_management": false, 00:31:57.062 "zone_append": false, 00:31:57.062 "compare": false, 00:31:57.062 "compare_and_write": false, 00:31:57.062 "abort": false, 00:31:57.062 "seek_hole": true, 00:31:57.062 "seek_data": true, 00:31:57.062 "copy": false, 00:31:57.062 "nvme_iov_md": false 00:31:57.062 }, 00:31:57.062 "driver_specific": { 00:31:57.063 "lvol": { 00:31:57.063 "lvol_store_uuid": "192b5de5-8b25-452c-84b0-1324c4e1973f", 00:31:57.063 "base_bdev": "aio_bdev", 00:31:57.063 "thin_provision": false, 00:31:57.063 "num_allocated_clusters": 38, 00:31:57.063 "snapshot": false, 00:31:57.063 "clone": false, 00:31:57.063 "esnap_clone": false 00:31:57.063 } 00:31:57.063 } 00:31:57.063 } 00:31:57.063 ] 00:31:57.063 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:31:57.063 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:57.063 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:57.323 12:05:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:57.323 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:57.323 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:57.323 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:57.323 12:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9b4e742-b847-4ef7-a040-e02f0b8ebbb7 00:31:57.584 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 192b5de5-8b25-452c-84b0-1324c4e1973f 00:31:57.845 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:57.845 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:57.845 00:31:57.845 real 0m15.798s 00:31:57.845 user 0m15.519s 00:31:57.845 sys 0m1.417s 00:31:57.845 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.845 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:57.845 ************************************ 00:31:57.845 END TEST lvs_grow_clean 00:31:57.845 ************************************ 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:58.105 ************************************ 00:31:58.105 START TEST lvs_grow_dirty 00:31:58.105 ************************************ 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.105 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:58.366 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:58.366 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:58.366 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4d139e87-13b8-40e1-8960-4b92631c7c71 00:31:58.366 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:31:58.366 12:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:58.627 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:58.627 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:58.627 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d139e87-13b8-40e1-8960-4b92631c7c71 lvol 150 00:31:58.888 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=56cf8a26-039c-41d0-87e6-1c991d8a6ada 00:31:58.888 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.888 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:58.888 [2024-10-11 12:05:43.421198] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:58.888 [2024-10-11 12:05:43.421348] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:58.888 true 00:31:58.888 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:58.888 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:31:59.149 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:59.149 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:59.409 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56cf8a26-039c-41d0-87e6-1c991d8a6ada 00:31:59.409 12:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:59.670 [2024-10-11 12:05:44.093731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1247780 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1247780 /var/tmp/bdevperf.sock 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1247780 ']' 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:59.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
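The dirty variant repeats the same pipeline against a fresh lvstore and lvol (UUIDs 4d139e87-13b8-40e1-8960-4b92631c7c71 and 56cf8a26-039c-41d0-87e6-1c991d8a6ada in the trace). The "Waiting for process..." message above comes from the waitforlisten helper in autotest_common.sh, which blocks until the freshly launched bdevperf answers on its RPC socket before any attach is attempted. A rough approximation of that helper, assuming rpc_get_methods as the liveness probe (the real implementation may differ):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # app exited before listening
            ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
                    &>/dev/null && return 0           # socket is up and answering
            sleep 0.5
        done
        return 1                                      # timed out
    }

Here it would be invoked as waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock, matching the message in the trace.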
00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:59.670 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:59.670 [2024-10-11 12:05:44.296267] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:31:59.670 [2024-10-11 12:05:44.296317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247780 ] 00:31:59.931 [2024-10-11 12:05:44.374282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.931 [2024-10-11 12:05:44.404125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.931 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.931 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:59.931 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:00.191 Nvme0n1 00:32:00.191 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:00.453 [ 00:32:00.453 { 00:32:00.453 "name": "Nvme0n1", 00:32:00.453 "aliases": [ 00:32:00.453 "56cf8a26-039c-41d0-87e6-1c991d8a6ada" 00:32:00.453 ], 00:32:00.453 "product_name": "NVMe disk", 00:32:00.453 "block_size": 4096, 00:32:00.453 "num_blocks": 38912, 00:32:00.453 "uuid": "56cf8a26-039c-41d0-87e6-1c991d8a6ada", 00:32:00.453 "numa_id": 0, 00:32:00.453 "assigned_rate_limits": { 00:32:00.453 "rw_ios_per_sec": 0, 00:32:00.453 "rw_mbytes_per_sec": 0, 00:32:00.453 "r_mbytes_per_sec": 0, 00:32:00.453 "w_mbytes_per_sec": 0 00:32:00.453 }, 00:32:00.453 "claimed": false, 00:32:00.453 "zoned": false, 00:32:00.453 "supported_io_types": { 00:32:00.453 "read": true, 00:32:00.453 "write": true, 00:32:00.453 "unmap": true, 00:32:00.453 "flush": true, 00:32:00.453 "reset": true, 00:32:00.453 "nvme_admin": true, 00:32:00.453 "nvme_io": true, 00:32:00.453 "nvme_io_md": false, 00:32:00.453 "write_zeroes": true, 00:32:00.453 "zcopy": false, 00:32:00.453 "get_zone_info": false, 00:32:00.453 "zone_management": false, 00:32:00.453 "zone_append": false, 00:32:00.453 "compare": true, 00:32:00.453 "compare_and_write": true, 00:32:00.453 "abort": true, 00:32:00.453 "seek_hole": false, 00:32:00.453 "seek_data": false, 00:32:00.453 "copy": true, 00:32:00.453 "nvme_iov_md": false 00:32:00.453 }, 00:32:00.453 "memory_domains": [ 00:32:00.453 { 00:32:00.453 "dma_device_id": "system", 00:32:00.453 "dma_device_type": 1 00:32:00.453 } 00:32:00.453 ], 00:32:00.453 "driver_specific": { 00:32:00.453 "nvme": [ 00:32:00.453 { 00:32:00.453 "trid": { 00:32:00.453 "trtype": "TCP", 00:32:00.453 "adrfam": "IPv4", 00:32:00.453 "traddr": "10.0.0.2", 00:32:00.453 "trsvcid": "4420", 00:32:00.453 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:00.453 }, 00:32:00.453 "ctrlr_data": 
{ 00:32:00.453 "cntlid": 1, 00:32:00.453 "vendor_id": "0x8086", 00:32:00.453 "model_number": "SPDK bdev Controller", 00:32:00.453 "serial_number": "SPDK0", 00:32:00.453 "firmware_revision": "25.01", 00:32:00.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.453 "oacs": { 00:32:00.453 "security": 0, 00:32:00.453 "format": 0, 00:32:00.453 "firmware": 0, 00:32:00.453 "ns_manage": 0 00:32:00.453 }, 00:32:00.453 "multi_ctrlr": true, 00:32:00.453 "ana_reporting": false 00:32:00.453 }, 00:32:00.453 "vs": { 00:32:00.453 "nvme_version": "1.3" 00:32:00.453 }, 00:32:00.453 "ns_data": { 00:32:00.453 "id": 1, 00:32:00.453 "can_share": true 00:32:00.453 } 00:32:00.453 } 00:32:00.453 ], 00:32:00.453 "mp_policy": "active_passive" 00:32:00.453 } 00:32:00.453 } 00:32:00.453 ] 00:32:00.453 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1247876 00:32:00.453 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:00.453 12:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:00.453 Running I/O for 10 seconds... 00:32:01.395 Latency(us) 00:32:01.395 [2024-10-11T10:05:46.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:01.395 Nvme0n1 : 1.00 24380.00 95.23 0.00 0.00 0.00 0.00 0.00 00:32:01.395 [2024-10-11T10:05:46.027Z] =================================================================================================================== 00:32:01.395 [2024-10-11T10:05:46.027Z] Total : 24380.00 95.23 0.00 0.00 0.00 0.00 0.00 00:32:01.395 00:32:02.336 12:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:02.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:02.597 Nvme0n1 : 2.00 24798.00 96.87 0.00 0.00 0.00 0.00 0.00 00:32:02.597 [2024-10-11T10:05:47.229Z] =================================================================================================================== 00:32:02.597 [2024-10-11T10:05:47.229Z] Total : 24798.00 96.87 0.00 0.00 0.00 0.00 0.00 00:32:02.597 00:32:02.597 true 00:32:02.597 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:02.597 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:02.858 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:02.858 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:02.858 12:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1247876 00:32:03.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.430 Nvme0n1 : 
3.00 24980.00 97.58 0.00 0.00 0.00 0.00 0.00 00:32:03.430 [2024-10-11T10:05:48.062Z] =================================================================================================================== 00:32:03.430 [2024-10-11T10:05:48.062Z] Total : 24980.00 97.58 0.00 0.00 0.00 0.00 0.00 00:32:03.430 00:32:04.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:04.374 Nvme0n1 : 4.00 25071.00 97.93 0.00 0.00 0.00 0.00 0.00 00:32:04.374 [2024-10-11T10:05:49.006Z] =================================================================================================================== 00:32:04.374 [2024-10-11T10:05:49.006Z] Total : 25071.00 97.93 0.00 0.00 0.00 0.00 0.00 00:32:04.374 00:32:05.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.762 Nvme0n1 : 5.00 25138.20 98.20 0.00 0.00 0.00 0.00 0.00 00:32:05.762 [2024-10-11T10:05:50.394Z] =================================================================================================================== 00:32:05.762 [2024-10-11T10:05:50.394Z] Total : 25138.20 98.20 0.00 0.00 0.00 0.00 0.00 00:32:05.762 00:32:06.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.706 Nvme0n1 : 6.00 25172.50 98.33 0.00 0.00 0.00 0.00 0.00 00:32:06.706 [2024-10-11T10:05:51.338Z] =================================================================================================================== 00:32:06.706 [2024-10-11T10:05:51.338Z] Total : 25172.50 98.33 0.00 0.00 0.00 0.00 0.00 00:32:06.706 00:32:07.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.649 Nvme0n1 : 7.00 25215.14 98.50 0.00 0.00 0.00 0.00 0.00 00:32:07.649 [2024-10-11T10:05:52.281Z] =================================================================================================================== 00:32:07.649 [2024-10-11T10:05:52.281Z] Total : 25215.14 98.50 0.00 0.00 0.00 0.00 0.00 00:32:07.649 00:32:08.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.593 Nvme0n1 : 8.00 25239.50 98.59 0.00 0.00 0.00 0.00 0.00 00:32:08.593 [2024-10-11T10:05:53.225Z] =================================================================================================================== 00:32:08.593 [2024-10-11T10:05:53.225Z] Total : 25239.50 98.59 0.00 0.00 0.00 0.00 0.00 00:32:08.593 00:32:09.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.536 Nvme0n1 : 9.00 25265.33 98.69 0.00 0.00 0.00 0.00 0.00 00:32:09.536 [2024-10-11T10:05:54.168Z] =================================================================================================================== 00:32:09.536 [2024-10-11T10:05:54.168Z] Total : 25265.33 98.69 0.00 0.00 0.00 0.00 0.00 00:32:09.536 00:32:10.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.480 Nvme0n1 : 10.00 25286.00 98.77 0.00 0.00 0.00 0.00 0.00 00:32:10.480 [2024-10-11T10:05:55.112Z] =================================================================================================================== 00:32:10.480 [2024-10-11T10:05:55.112Z] Total : 25286.00 98.77 0.00 0.00 0.00 0.00 0.00 00:32:10.480 00:32:10.480 00:32:10.480 Latency(us) 00:32:10.480 [2024-10-11T10:05:55.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.480 Nvme0n1 : 10.00 25283.77 98.76 0.00 0.00 5059.34 3263.15 31238.83 00:32:10.480 
[2024-10-11T10:05:55.112Z] =================================================================================================================== 00:32:10.480 [2024-10-11T10:05:55.112Z] Total : 25283.77 98.76 0.00 0.00 5059.34 3263.15 31238.83 00:32:10.480 { 00:32:10.480 "results": [ 00:32:10.480 { 00:32:10.480 "job": "Nvme0n1", 00:32:10.480 "core_mask": "0x2", 00:32:10.480 "workload": "randwrite", 00:32:10.480 "status": "finished", 00:32:10.480 "queue_depth": 128, 00:32:10.480 "io_size": 4096, 00:32:10.480 "runtime": 10.003333, 00:32:10.480 "iops": 25283.77291848627, 00:32:10.480 "mibps": 98.76473796283699, 00:32:10.480 "io_failed": 0, 00:32:10.480 "io_timeout": 0, 00:32:10.480 "avg_latency_us": 5059.344310103511, 00:32:10.480 "min_latency_us": 3263.1466666666665, 00:32:10.480 "max_latency_us": 31238.826666666668 00:32:10.480 } 00:32:10.480 ], 00:32:10.480 "core_count": 1 00:32:10.480 } 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1247780 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1247780 ']' 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1247780 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1247780 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1247780' 00:32:10.481 killing process with pid 1247780 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1247780 00:32:10.481 Received shutdown signal, test time was about 10.000000 seconds 00:32:10.481 00:32:10.481 Latency(us) 00:32:10.481 [2024-10-11T10:05:55.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.481 [2024-10-11T10:05:55.113Z] =================================================================================================================== 00:32:10.481 [2024-10-11T10:05:55.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:10.481 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1247780 00:32:10.750 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:10.750 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:11.059 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:11.059 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1244235 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1244235 00:32:11.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1244235 Killed "${NVMF_APP[@]}" "$@" 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1249997 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1249997 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1249997 ']' 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
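The run up to this point is the technical core of lvs_grow_dirty: a bdevperf instance attaches the exported namespace over TCP, drives a 10-second randwrite workload, and the lvstore is grown mid-run before the target is killed with -9 while its metadata is still dirty. A minimal sketch of that RPC sequence, assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock; the UUID and addresses are taken from this run and stand in for real values:

# Sketch: grow an lvstore while bdevperf drives I/O against it.
RPC=./scripts/rpc.py
BP_SOCK=/var/tmp/bdevperf.sock
LVS_UUID=4d139e87-13b8-40e1-8960-4b92631c7c71

# Attach the NVMe-oF namespace inside bdevperf as bdev Nvme0n1.
$RPC -s $BP_SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# Kick off the configured randwrite job (10 seconds in this run).
./examples/bdev/bdevperf/bdevperf.py -s $BP_SOCK perform_tests &
sleep 2

# While I/O is in flight, grow the lvstore on the target side (default RPC socket)...
$RPC bdev_lvol_grow_lvstore -u $LVS_UUID

# ...and confirm the cluster count grew to the expected 99.
data_clusters=$($RPC bdev_lvol_get_lvstores -u $LVS_UUID \
    | jq -r '.[0].total_data_clusters')
(( data_clusters == 99 )) || echo "unexpected cluster count: $data_clusters"

wait  # let the 10-second run finish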
00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:11.361 12:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:11.361 [2024-10-11 12:05:55.836277] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:11.361 [2024-10-11 12:05:55.837406] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:11.361 [2024-10-11 12:05:55.837464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.361 [2024-10-11 12:05:55.927048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.361 [2024-10-11 12:05:55.961864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.361 [2024-10-11 12:05:55.961895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.361 [2024-10-11 12:05:55.961901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.361 [2024-10-11 12:05:55.961905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.361 [2024-10-11 12:05:55.961910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.361 [2024-10-11 12:05:55.962368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.670 [2024-10-11 12:05:56.014608] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:11.670 [2024-10-11 12:05:56.014808] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
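The relaunch just logged is what makes the "dirty" case interesting: the old target died under kill -9, and a fresh nvmf_tgt comes up with --interrupt-mode inside the test's network namespace, as the command line above shows. A sketch of that relaunch under the same assumptions; the readiness poll via rpc_get_methods is an assumption standing in for the harness's own waitforlisten helper:

# Sketch: relaunch the target in interrupt mode (namespace and flags as logged).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!

# Poll the default RPC socket until the app answers (roughly what waitforlisten
# does, with retries; rpc_get_methods is a cheap always-available RPC).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done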
00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:12.294 [2024-10-11 12:05:56.848738] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:12.294 [2024-10-11 12:05:56.849030] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:12.294 [2024-10-11 12:05:56.849119] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 56cf8a26-039c-41d0-87e6-1c991d8a6ada 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=56cf8a26-039c-41d0-87e6-1c991d8a6ada 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:12.294 12:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:12.565 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56cf8a26-039c-41d0-87e6-1c991d8a6ada -t 2000 00:32:12.826 [ 00:32:12.826 { 00:32:12.826 "name": "56cf8a26-039c-41d0-87e6-1c991d8a6ada", 00:32:12.826 "aliases": [ 00:32:12.826 "lvs/lvol" 00:32:12.826 ], 00:32:12.826 "product_name": "Logical Volume", 00:32:12.826 "block_size": 4096, 00:32:12.826 "num_blocks": 38912, 00:32:12.826 "uuid": "56cf8a26-039c-41d0-87e6-1c991d8a6ada", 00:32:12.826 "assigned_rate_limits": { 00:32:12.826 "rw_ios_per_sec": 0, 00:32:12.826 "rw_mbytes_per_sec": 0, 00:32:12.826 
"r_mbytes_per_sec": 0, 00:32:12.826 "w_mbytes_per_sec": 0 00:32:12.826 }, 00:32:12.826 "claimed": false, 00:32:12.826 "zoned": false, 00:32:12.826 "supported_io_types": { 00:32:12.826 "read": true, 00:32:12.826 "write": true, 00:32:12.826 "unmap": true, 00:32:12.826 "flush": false, 00:32:12.826 "reset": true, 00:32:12.826 "nvme_admin": false, 00:32:12.826 "nvme_io": false, 00:32:12.826 "nvme_io_md": false, 00:32:12.826 "write_zeroes": true, 00:32:12.826 "zcopy": false, 00:32:12.826 "get_zone_info": false, 00:32:12.826 "zone_management": false, 00:32:12.826 "zone_append": false, 00:32:12.826 "compare": false, 00:32:12.826 "compare_and_write": false, 00:32:12.826 "abort": false, 00:32:12.826 "seek_hole": true, 00:32:12.826 "seek_data": true, 00:32:12.826 "copy": false, 00:32:12.826 "nvme_iov_md": false 00:32:12.826 }, 00:32:12.826 "driver_specific": { 00:32:12.826 "lvol": { 00:32:12.826 "lvol_store_uuid": "4d139e87-13b8-40e1-8960-4b92631c7c71", 00:32:12.826 "base_bdev": "aio_bdev", 00:32:12.826 "thin_provision": false, 00:32:12.826 "num_allocated_clusters": 38, 00:32:12.826 "snapshot": false, 00:32:12.826 "clone": false, 00:32:12.826 "esnap_clone": false 00:32:12.826 } 00:32:12.826 } 00:32:12.826 } 00:32:12.826 ] 00:32:12.826 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:12.826 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:12.826 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:12.826 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:12.826 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:12.826 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:13.087 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:13.087 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:13.349 [2024-10-11 12:05:57.746927] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:13.349 12:05:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:13.349 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:13.610 request: 00:32:13.610 { 00:32:13.610 "uuid": "4d139e87-13b8-40e1-8960-4b92631c7c71", 00:32:13.610 "method": "bdev_lvol_get_lvstores", 00:32:13.610 "req_id": 1 00:32:13.610 } 00:32:13.610 Got JSON-RPC error response 00:32:13.610 response: 00:32:13.610 { 00:32:13.610 "code": -19, 00:32:13.610 "message": "No such device" 00:32:13.610 } 00:32:13.610 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:13.610 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:13.610 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:13.610 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:13.610 12:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:13.610 aio_bdev 00:32:13.610 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 56cf8a26-039c-41d0-87e6-1c991d8a6ada 00:32:13.610 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=56cf8a26-039c-41d0-87e6-1c991d8a6ada 00:32:13.610 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:13.610 12:05:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:32:13.610 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:13.610 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:13.610 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:13.872 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 56cf8a26-039c-41d0-87e6-1c991d8a6ada -t 2000 00:32:14.133 [ 00:32:14.133 { 00:32:14.133 "name": "56cf8a26-039c-41d0-87e6-1c991d8a6ada", 00:32:14.133 "aliases": [ 00:32:14.133 "lvs/lvol" 00:32:14.133 ], 00:32:14.133 "product_name": "Logical Volume", 00:32:14.133 "block_size": 4096, 00:32:14.133 "num_blocks": 38912, 00:32:14.133 "uuid": "56cf8a26-039c-41d0-87e6-1c991d8a6ada", 00:32:14.133 "assigned_rate_limits": { 00:32:14.133 "rw_ios_per_sec": 0, 00:32:14.133 "rw_mbytes_per_sec": 0, 00:32:14.133 "r_mbytes_per_sec": 0, 00:32:14.133 "w_mbytes_per_sec": 0 00:32:14.133 }, 00:32:14.133 "claimed": false, 00:32:14.133 "zoned": false, 00:32:14.133 "supported_io_types": { 00:32:14.133 "read": true, 00:32:14.133 "write": true, 00:32:14.133 "unmap": true, 00:32:14.133 "flush": false, 00:32:14.133 "reset": true, 00:32:14.133 "nvme_admin": false, 00:32:14.133 "nvme_io": false, 00:32:14.133 "nvme_io_md": false, 00:32:14.133 "write_zeroes": true, 00:32:14.133 "zcopy": false, 00:32:14.133 "get_zone_info": false, 00:32:14.133 "zone_management": false, 00:32:14.133 "zone_append": false, 00:32:14.133 "compare": false, 00:32:14.133 "compare_and_write": false, 00:32:14.133 "abort": false, 00:32:14.133 "seek_hole": true, 00:32:14.133 "seek_data": true, 00:32:14.133 "copy": false, 00:32:14.133 "nvme_iov_md": false 00:32:14.133 }, 00:32:14.133 "driver_specific": { 00:32:14.133 "lvol": { 00:32:14.133 "lvol_store_uuid": "4d139e87-13b8-40e1-8960-4b92631c7c71", 00:32:14.133 "base_bdev": "aio_bdev", 00:32:14.133 "thin_provision": false, 00:32:14.133 "num_allocated_clusters": 38, 00:32:14.133 "snapshot": false, 00:32:14.133 "clone": false, 00:32:14.133 "esnap_clone": false 00:32:14.133 } 00:32:14.133 } 00:32:14.133 } 00:32:14.133 ] 00:32:14.133 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:32:14.133 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:14.133 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:14.133 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:14.133 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:14.133 12:05:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:14.394 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:14.394 12:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56cf8a26-039c-41d0-87e6-1c991d8a6ada 00:32:14.655 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d139e87-13b8-40e1-8960-4b92631c7c71 00:32:14.916 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:14.916 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.916 00:32:14.916 real 0m16.950s 00:32:14.916 user 0m34.426s 00:32:14.916 sys 0m3.289s 00:32:14.916 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:14.916 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:14.916 ************************************ 00:32:14.916 END TEST lvs_grow_dirty 00:32:14.916 ************************************ 00:32:14.916 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:14.917 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:32:14.917 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:32:14.917 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:32:14.917 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:15.177 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:32:15.177 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:15.178 nvmf_trace.0 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
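The block just completed is the actual dirty-recovery verification: re-creating the AIO base bdev makes blobstore replay the unclean metadata ("Performing recovery on blobstore" above), the lvol bdev reappears under the same UUID, and the free/total cluster counts must match the pre-kill state before everything is torn down. A condensed sketch of that check, with the UUIDs and paths from this run standing in for real values:

# Sketch: verify blobstore recovery of a dirty lvstore, then tear down.
AIO_FILE=./test/nvmf/target/aio_bdev
LVS_UUID=4d139e87-13b8-40e1-8960-4b92631c7c71
LVOL_UUID=56cf8a26-039c-41d0-87e6-1c991d8a6ada

# Re-create the AIO bdev; blobstore recovery runs as the lvstore is examined.
./scripts/rpc.py bdev_aio_create $AIO_FILE aio_bdev 4096
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py bdev_get_bdevs -b $LVOL_UUID -t 2000   # wait for the lvol bdev

# Cluster counts must survive the unclean shutdown (61 free of 99 total here).
free=$(./scripts/rpc.py bdev_lvol_get_lvstores -u $LVS_UUID | jq -r '.[0].free_clusters')
total=$(./scripts/rpc.py bdev_lvol_get_lvstores -u $LVS_UUID | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 )) || echo "recovery mismatch: free=$free total=$total"

# Teardown mirrors the log: lvol, lvstore, AIO bdev, backing file.
./scripts/rpc.py bdev_lvol_delete $LVOL_UUID
./scripts/rpc.py bdev_lvol_delete_lvstore -u $LVS_UUID
./scripts/rpc.py bdev_aio_delete aio_bdev
rm -f $AIO_FILE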
00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:15.178 rmmod nvme_tcp 00:32:15.178 rmmod nvme_fabrics 00:32:15.178 rmmod nvme_keyring 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1249997 ']' 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1249997 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1249997 ']' 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1249997 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1249997 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1249997' 00:32:15.178 killing process with pid 1249997 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1249997 00:32:15.178 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1249997 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.439 12:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.354 12:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.354 00:32:17.354 real 0m44.182s 00:32:17.354 user 0m52.935s 00:32:17.354 sys 0m10.892s 00:32:17.354 12:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:17.354 12:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:17.354 ************************************ 00:32:17.354 END TEST nvmf_lvs_grow 00:32:17.354 ************************************ 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:17.615 ************************************ 00:32:17.615 START TEST nvmf_bdev_io_wait 00:32:17.615 ************************************ 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:17.615 * Looking for test storage... 
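run_test is the harness entry point that produced the START TEST/END TEST banners and the real/user/sys timing above, and the next test, nvmf_bdev_io_wait, enters through the same path. A rough illustration of a wrapper of that shape (an assumption about its structure, not SPDK's actual implementation):

# Illustration only: a run_test-style wrapper (assumed shape, not the real one).
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# e.g. run_test_sketch nvmf_bdev_io_wait ./test/nvmf/target/bdev_io_wait.sh \
#          --transport=tcp --interrupt-mode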
00:32:17.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:17.615 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:17.616 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.616 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.616 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.878 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.878 --rc genhtml_branch_coverage=1 00:32:17.878 --rc genhtml_function_coverage=1 00:32:17.878 --rc genhtml_legend=1 00:32:17.878 --rc geninfo_all_blocks=1 00:32:17.879 --rc geninfo_unexecuted_blocks=1 00:32:17.879 00:32:17.879 ' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:17.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.879 --rc genhtml_branch_coverage=1 00:32:17.879 --rc genhtml_function_coverage=1 00:32:17.879 --rc genhtml_legend=1 00:32:17.879 --rc geninfo_all_blocks=1 00:32:17.879 --rc geninfo_unexecuted_blocks=1 00:32:17.879 00:32:17.879 ' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:17.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.879 --rc genhtml_branch_coverage=1 00:32:17.879 --rc genhtml_function_coverage=1 00:32:17.879 --rc genhtml_legend=1 00:32:17.879 --rc geninfo_all_blocks=1 00:32:17.879 --rc geninfo_unexecuted_blocks=1 00:32:17.879 00:32:17.879 ' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:17.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.879 --rc genhtml_branch_coverage=1 00:32:17.879 --rc genhtml_function_coverage=1 00:32:17.879 --rc genhtml_legend=1 00:32:17.879 --rc geninfo_all_blocks=1 00:32:17.879 --rc 
geninfo_unexecuted_blocks=1 00:32:17.879 00:32:17.879 ' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.879 12:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:26.028 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
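The arrays built above map known Intel E810/X722 and Mellanox vendor:device IDs so nvmftestinit can pick physical test NICs; the "Found 0000:4b:00.0 (0x8086 - 0x159b)" lines that follow are the two E810 ports matching that table, resolved to their kernel net devices through sysfs. An equivalent standalone lookup, assuming lspci is available (the harness itself walks a prebuilt pci_bus_cache instead):

# Sketch: find E810 (8086:1592 / 8086:159b) and X722 (8086:37d2) functions with
# lspci, then map each PCI address to its net device via sysfs (as the log does).
for id in 8086:1592 8086:159b 8086:37d2; do
    for pci in $(lspci -Dn -d "$id" | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
        done
    done
done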
00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:26.029 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:26.029 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:26.029 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:26.029 
12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:26.029 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:26.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:32:26.029 00:32:26.029 --- 10.0.0.2 ping statistics --- 00:32:26.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.029 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:32:26.029 00:32:26.029 --- 10.0.0.1 ping statistics --- 00:32:26.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.029 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1255235 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1255235 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1255235 ']' 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:26.029 12:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.029 [2024-10-11 12:06:09.865727] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:26.029 [2024-10-11 12:06:09.866818] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:26.030 [2024-10-11 12:06:09.866867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.030 [2024-10-11 12:06:09.955010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:26.030 [2024-10-11 12:06:10.012262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:26.030 [2024-10-11 12:06:10.012317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:26.030 [2024-10-11 12:06:10.012328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.030 [2024-10-11 12:06:10.012335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.030 [2024-10-11 12:06:10.012342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:26.030 [2024-10-11 12:06:10.014526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.030 [2024-10-11 12:06:10.014724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:26.030 [2024-10-11 12:06:10.014835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:26.030 [2024-10-11 12:06:10.014838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.030 [2024-10-11 12:06:10.015573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
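At this point nvmf_tcp_init and nvmfappstart have built the loopback topology the test runs on: port cvl_0_0 is moved into a private network namespace and plays the target at 10.0.0.2, its sibling port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the target application is launched inside the namespace. Condensed from the commands issued above (names, addresses, and flags are verbatim from the log; the iptables comment tagging is elided):

# Give the target port its own namespace; keep the initiator port in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
# Start the target inside the namespace, interrupt-driven, parked until RPC:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &

The --interrupt-mode flag is why every reactor and spdk_thread above reports "intr mode": the four reactors on cores 0-3 sleep on event file descriptors instead of busy-polling.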
00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 [2024-10-11 12:06:10.815925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:26.292 [2024-10-11 12:06:10.816430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:26.292 [2024-10-11 12:06:10.816506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:26.292 [2024-10-11 12:06:10.816657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
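With the target parked on --wait-for-rpc, all remaining setup happens over the RPC socket: the two rpc_cmd calls above and the five that follow. The same sequence expressed with SPDK's scripts/rpc.py in place of the harness's rpc_cmd wrapper (verbs and arguments are exactly those in the log; the tiny bdev_io pool is presumably what starves submissions into the IO_WAIT path this test is named for):

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_set_options -p 5 -c 1    # pool of 5 bdev_ios, cache of 1: starve I/O
$RPC framework_start_init          # leave the --wait-for-rpc holding state
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420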
00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 [2024-10-11 12:06:10.827779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 Malloc0 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 [2024-10-11 12:06:10.900266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1255761 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1255763 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:26.292 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:26.293 { 00:32:26.293 "params": { 00:32:26.293 "name": "Nvme$subsystem", 00:32:26.293 "trtype": "$TEST_TRANSPORT", 00:32:26.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.293 "adrfam": "ipv4", 00:32:26.293 "trsvcid": "$NVMF_PORT", 00:32:26.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.293 "hdgst": ${hdgst:-false}, 00:32:26.293 "ddgst": ${ddgst:-false} 00:32:26.293 }, 00:32:26.293 "method": "bdev_nvme_attach_controller" 00:32:26.293 } 00:32:26.293 EOF 00:32:26.293 )") 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1255765 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1255768 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:26.293 { 00:32:26.293 "params": { 00:32:26.293 "name": "Nvme$subsystem", 00:32:26.293 "trtype": "$TEST_TRANSPORT", 00:32:26.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.293 "adrfam": "ipv4", 00:32:26.293 "trsvcid": "$NVMF_PORT", 00:32:26.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.293 "hdgst": ${hdgst:-false}, 00:32:26.293 "ddgst": ${ddgst:-false} 00:32:26.293 }, 00:32:26.293 "method": "bdev_nvme_attach_controller" 00:32:26.293 } 00:32:26.293 EOF 00:32:26.293 )") 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@35 -- # sync 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:26.293 { 00:32:26.293 "params": { 00:32:26.293 "name": "Nvme$subsystem", 00:32:26.293 "trtype": "$TEST_TRANSPORT", 00:32:26.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.293 "adrfam": "ipv4", 00:32:26.293 "trsvcid": "$NVMF_PORT", 00:32:26.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.293 "hdgst": ${hdgst:-false}, 00:32:26.293 "ddgst": ${ddgst:-false} 00:32:26.293 }, 00:32:26.293 "method": "bdev_nvme_attach_controller" 00:32:26.293 } 00:32:26.293 EOF 00:32:26.293 )") 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:26.293 { 00:32:26.293 "params": { 00:32:26.293 "name": "Nvme$subsystem", 00:32:26.293 "trtype": "$TEST_TRANSPORT", 00:32:26.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.293 "adrfam": "ipv4", 00:32:26.293 "trsvcid": "$NVMF_PORT", 00:32:26.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.293 "hdgst": ${hdgst:-false}, 00:32:26.293 "ddgst": ${ddgst:-false} 00:32:26.293 }, 00:32:26.293 "method": "bdev_nvme_attach_controller" 00:32:26.293 } 00:32:26.293 EOF 00:32:26.293 )") 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1255761 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
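Each bdevperf launch above receives its bdev configuration via --json /dev/fd/63, i.e. a bash process substitution around gen_nvmf_target_json: the function expands the heredoc template shown once per requested subsystem, round-trips the result through jq as a syntax check, and printf-joins the pieces; the fully resolved documents are echoed just below. A sketch of the pattern for a single controller -- the outer subsystems/bdev wrapper is my reconstruction of SPDK's JSON config layout, abbreviated from what nvmf/common.sh actually emits:

# Emit a bdevperf JSON config that attaches one NVMe-oF controller.
gen_config() {
  jq . <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# The <(...) process substitution is what shows up as /dev/fd/63 in the log:
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_config) \
    -q 128 -o 4096 -w write -t 1 -s 256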
00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:26.293 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:26.293 "params": { 00:32:26.293 "name": "Nvme1", 00:32:26.293 "trtype": "tcp", 00:32:26.293 "traddr": "10.0.0.2", 00:32:26.293 "adrfam": "ipv4", 00:32:26.293 "trsvcid": "4420", 00:32:26.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:26.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:26.293 "hdgst": false, 00:32:26.293 "ddgst": false 00:32:26.293 }, 00:32:26.293 "method": "bdev_nvme_attach_controller" 00:32:26.293 }' 00:32:26.554 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:26.555 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:26.555 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:26.555 "params": { 00:32:26.555 "name": "Nvme1", 00:32:26.555 "trtype": "tcp", 00:32:26.555 "traddr": "10.0.0.2", 00:32:26.555 "adrfam": "ipv4", 00:32:26.555 "trsvcid": "4420", 00:32:26.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:26.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:26.555 "hdgst": false, 00:32:26.555 "ddgst": false 00:32:26.555 }, 00:32:26.555 "method": "bdev_nvme_attach_controller" 00:32:26.555 }' 00:32:26.555 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:26.555 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:26.555 "params": { 00:32:26.555 "name": "Nvme1", 00:32:26.555 "trtype": "tcp", 00:32:26.555 "traddr": "10.0.0.2", 00:32:26.555 "adrfam": "ipv4", 00:32:26.555 "trsvcid": "4420", 00:32:26.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:26.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:26.555 "hdgst": false, 00:32:26.555 "ddgst": false 00:32:26.555 }, 00:32:26.555 "method": "bdev_nvme_attach_controller" 00:32:26.555 }' 00:32:26.555 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:26.555 12:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:26.555 "params": { 00:32:26.555 "name": "Nvme1", 00:32:26.555 "trtype": "tcp", 00:32:26.555 "traddr": "10.0.0.2", 00:32:26.555 "adrfam": "ipv4", 00:32:26.555 "trsvcid": "4420", 00:32:26.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:26.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:26.555 "hdgst": false, 00:32:26.555 "ddgst": false 00:32:26.555 }, 00:32:26.555 "method": "bdev_nvme_attach_controller" 00:32:26.555 }' 00:32:26.555 [2024-10-11 12:06:10.953733] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:26.555 [2024-10-11 12:06:10.953803] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:26.555 [2024-10-11 12:06:10.960908] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:32:26.555 [2024-10-11 12:06:10.960970] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:26.555 [2024-10-11 12:06:10.960959] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:26.555 [2024-10-11 12:06:10.960959] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:26.555 [2024-10-11 12:06:10.961023] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:26.555 [2024-10-11 12:06:10.961030] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:26.555 [2024-10-11 12:06:11.162656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.815 [2024-10-11 12:06:11.205648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:26.815 [2024-10-11 12:06:11.229545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.815 [2024-10-11 12:06:11.269565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:26.815 [2024-10-11 12:06:11.298136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.815 [2024-10-11 12:06:11.334378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:26.815 [2024-10-11 12:06:11.361454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.815 [2024-10-11 12:06:11.399529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:27.076 Running I/O for 1 seconds... 00:32:27.076 Running I/O for 1 seconds... 00:32:27.076 Running I/O for 1 seconds... 00:32:27.076 Running I/O for 1 seconds... 
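The four workloads run as four independent bdevperf processes pinned to disjoint core masks (0x10/0x20/0x40/0x80) with distinct shared-memory IDs, all attached to the same remote Malloc0 namespace; the script records WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and reaps them with wait in order. A skeleton of that fan-out, reusing the gen_config sketch above (a hypothetical helper -- the harness inlines the equivalent heredoc):

declare -A mask=( [write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80 )
declare -A pid
i=1
for wl in write read flush unmap; do
  ./build/examples/bdevperf -m "${mask[$wl]}" -i $((i++)) \
      --json <(gen_config) -q 128 -o 4096 -w "$wl" -t 1 -s 256 &
  pid[$wl]=$!
done
sync                                  # the explicit sync issued above
for wl in write read flush unmap; do
  wait "${pid[$wl]}"                  # wait $WRITE_PID, $READ_PID, ...
done

The per-workload tables that follow are easy to sanity-check: 10353.62 write IOPS at 4 KiB is 10353.62 x 4096 / 2^20 ≈ 40.44 MiB/s, exactly the MiB/s column, and flush reaches ~187k IOPS because flushing a RAM-backed malloc bdev moves no data.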
00:32:28.021 10292.00 IOPS, 40.20 MiB/s 00:32:28.021 Latency(us) 00:32:28.021 [2024-10-11T10:06:12.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.021 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:28.021 Nvme1n1 : 1.01 10353.62 40.44 0.00 0.00 12314.63 2334.72 14417.92 00:32:28.021 [2024-10-11T10:06:12.653Z] =================================================================================================================== 00:32:28.021 [2024-10-11T10:06:12.653Z] Total : 10353.62 40.44 0.00 0.00 12314.63 2334.72 14417.92 00:32:28.021 10299.00 IOPS, 40.23 MiB/s 00:32:28.021 Latency(us) 00:32:28.021 [2024-10-11T10:06:12.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.021 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:28.021 Nvme1n1 : 1.01 10369.25 40.50 0.00 0.00 12301.20 2921.81 16602.45 00:32:28.021 [2024-10-11T10:06:12.653Z] =================================================================================================================== 00:32:28.021 [2024-10-11T10:06:12.653Z] Total : 10369.25 40.50 0.00 0.00 12301.20 2921.81 16602.45 00:32:28.021 9372.00 IOPS, 36.61 MiB/s 00:32:28.021 Latency(us) 00:32:28.021 [2024-10-11T10:06:12.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.021 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:28.021 Nvme1n1 : 1.01 9432.04 36.84 0.00 0.00 13520.17 5761.71 20862.29 00:32:28.021 [2024-10-11T10:06:12.653Z] =================================================================================================================== 00:32:28.021 [2024-10-11T10:06:12.653Z] Total : 9432.04 36.84 0.00 0.00 13520.17 5761.71 20862.29 00:32:28.021 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1255763 00:32:28.021 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1255765 00:32:28.282 187728.00 IOPS, 733.31 MiB/s 00:32:28.283 Latency(us) 00:32:28.283 [2024-10-11T10:06:12.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.283 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:28.283 Nvme1n1 : 1.00 187354.73 731.85 0.00 0.00 679.33 303.79 1966.08 00:32:28.283 [2024-10-11T10:06:12.915Z] =================================================================================================================== 00:32:28.283 [2024-10-11T10:06:12.915Z] Total : 187354.73 731.85 0.00 0.00 679.33 303.79 1966.08 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1255768 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:28.283 rmmod nvme_tcp 00:32:28.283 rmmod nvme_fabrics 00:32:28.283 rmmod nvme_keyring 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1255235 ']' 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1255235 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1255235 ']' 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1255235 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:28.283 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1255235 00:32:28.544 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:28.544 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:28.544 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1255235' 00:32:28.544 killing process with pid 1255235 00:32:28.544 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1255235 00:32:28.544 12:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1255235 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.544 12:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.091 00:32:31.091 real 0m13.107s 00:32:31.091 user 0m15.563s 00:32:31.091 sys 0m7.746s 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.091 ************************************ 00:32:31.091 END TEST nvmf_bdev_io_wait 00:32:31.091 ************************************ 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:31.091 ************************************ 00:32:31.091 START TEST nvmf_queue_depth 00:32:31.091 ************************************ 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:31.091 * Looking for test storage... 
00:32:31.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.091 --rc genhtml_branch_coverage=1 00:32:31.091 --rc genhtml_function_coverage=1 00:32:31.091 --rc genhtml_legend=1 00:32:31.091 --rc geninfo_all_blocks=1 00:32:31.091 --rc geninfo_unexecuted_blocks=1 00:32:31.091 00:32:31.091 ' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.091 --rc genhtml_branch_coverage=1 00:32:31.091 --rc genhtml_function_coverage=1 00:32:31.091 --rc genhtml_legend=1 00:32:31.091 --rc geninfo_all_blocks=1 00:32:31.091 --rc geninfo_unexecuted_blocks=1 00:32:31.091 00:32:31.091 ' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.091 --rc genhtml_branch_coverage=1 00:32:31.091 --rc genhtml_function_coverage=1 00:32:31.091 --rc genhtml_legend=1 00:32:31.091 --rc geninfo_all_blocks=1 00:32:31.091 --rc geninfo_unexecuted_blocks=1 00:32:31.091 00:32:31.091 ' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.091 --rc genhtml_branch_coverage=1 00:32:31.091 --rc genhtml_function_coverage=1 00:32:31.091 --rc genhtml_legend=1 00:32:31.091 --rc geninfo_all_blocks=1 00:32:31.091 --rc 
geninfo_unexecuted_blocks=1 00:32:31.091 00:32:31.091 ' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.091 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.092 12:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
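Two details in this new test's preamble differ from bdev_io_wait: the same 64 MiB / 512 B malloc bdev parameters are reused, but bdevperf will now be driven through its own RPC socket (bdevperf_rpc_sock=/var/tmp/bdevperf.sock above) rather than purely by command-line flags. The usual shape of that socket-driven pattern, sketched from SPDK bdevperf conventions rather than from this log -- the -z idle flag, the queue/size/time numbers, and the bdevperf.py invocation are my assumptions; the real invocation appears further down:

# Start bdevperf idle (-z), listening on its own RPC socket (-r):
./build/examples/bdevperf -m 0x2 -z -r /var/tmp/bdevperf.sock \
    --json <(gen_config) -q 1024 -o 4096 -w verify -t 10 &
# Trigger the run and collect results over that socket:
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests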
00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.235 12:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:39.235 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:39.235 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:32:39.235 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:39.235 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.235 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:39.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:32:39.236 00:32:39.236 --- 10.0.0.2 ping statistics --- 00:32:39.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.236 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:39.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:32:39.236 00:32:39.236 --- 10.0.0.1 ping statistics --- 00:32:39.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.236 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1260229 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1260229 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1260229 ']' 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
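nvmf_tcp_init, traced above, splits the two E810 ports into roles: cvl_0_0 becomes the target interface inside a private network namespace at 10.0.0.2, cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1, and an iptables rule opens the NVMe/TCP port before both directions are verified with one ping each (0.644 ms and 0.317 ms round trips). Condensed from the trace, with the nvmf_tgt launch that nvmfappstart performs next (binary path shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk \
          ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2

Running the target under ip netns exec is what makes a single two-port host behave like a real target/initiator pair over the wire instead of loopback.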
00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.236 12:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.236 [2024-10-11 12:06:23.045785] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:39.236 [2024-10-11 12:06:23.046895] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:32:39.236 [2024-10-11 12:06:23.046946] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.236 [2024-10-11 12:06:23.115750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.236 [2024-10-11 12:06:23.162388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.236 [2024-10-11 12:06:23.162438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.236 [2024-10-11 12:06:23.162445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.236 [2024-10-11 12:06:23.162451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.236 [2024-10-11 12:06:23.162456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.236 [2024-10-11 12:06:23.163148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.236 [2024-10-11 12:06:23.233786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:39.236 [2024-10-11 12:06:23.234019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
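The core masks traced here keep target and initiator off each other's CPUs: nvmf_tgt runs with -m 0x2 (binary 10, so its single reactor lands on core 1, matching the 'Reactor started on core 1' notice), while the bdevperf run further down uses -c 0x1 and gets core 0. Both the app thread and the nvmf poll group are switched to interrupt mode, so the reactor sleeps between events instead of busy-polling. A small sketch for decoding such a mask:

  mask=0x2
  for (( cpu = 0; cpu < 64; cpu++ )); do
          (( (mask >> cpu) & 1 )) && echo "reactor on core $cpu"   # prints: reactor on core 1
  done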
00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.236 [2024-10-11 12:06:23.311961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.236 Malloc0 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
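The rpc_cmd calls above are thin wrappers over scripts/rpc.py against the default /var/tmp/spdk.sock, each gated by the '[[ 0 == 0 ]]' return-code check. The same target can be assembled by hand; a sketch with the repository path shortened (the transport flags are copied verbatim from NVMF_TRANSPORT_OPTS as traced):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options from NVMF_TRANSPORT_OPTS
  $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The listener address is the namespaced target IP, so the bdev_nvme_attach_controller call in the bdevperf step has to cross the cvl_0_1 to cvl_0_0 link rather than connect over loopback.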
00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.236 [2024-10-11 12:06:23.392159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1260415 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1260415 /var/tmp/bdevperf.sock 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1260415 ']' 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:39.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:39.236 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.237 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.237 [2024-10-11 12:06:23.451007] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:32:39.237 [2024-10-11 12:06:23.451072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260415 ] 00:32:39.237 [2024-10-11 12:06:23.508697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.237 [2024-10-11 12:06:23.555951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.237 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.237 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:39.237 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:39.237 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.237 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.237 NVMe0n1 00:32:39.237 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.237 12:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:39.498 Running I/O for 10 seconds... 00:32:41.396 9179.00 IOPS, 35.86 MiB/s [2024-10-11T10:06:26.974Z] 9219.00 IOPS, 36.01 MiB/s [2024-10-11T10:06:28.359Z] 9293.00 IOPS, 36.30 MiB/s [2024-10-11T10:06:29.299Z] 9993.50 IOPS, 39.04 MiB/s [2024-10-11T10:06:30.239Z] 10744.60 IOPS, 41.97 MiB/s [2024-10-11T10:06:31.180Z] 11273.50 IOPS, 44.04 MiB/s [2024-10-11T10:06:32.121Z] 11690.29 IOPS, 45.67 MiB/s [2024-10-11T10:06:33.061Z] 11986.50 IOPS, 46.82 MiB/s [2024-10-11T10:06:34.002Z] 12208.67 IOPS, 47.69 MiB/s [2024-10-11T10:06:34.262Z] 12407.50 IOPS, 48.47 MiB/s 00:32:49.630 Latency(us) 00:32:49.630 [2024-10-11T10:06:34.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.630 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:49.631 Verification LBA range: start 0x0 length 0x4000 00:32:49.631 NVMe0n1 : 10.06 12441.16 48.60 0.00 0.00 82067.07 17913.17 67720.53 00:32:49.631 [2024-10-11T10:06:34.263Z] =================================================================================================================== 00:32:49.631 [2024-10-11T10:06:34.263Z] Total : 12441.16 48.60 0.00 0.00 82067.07 17913.17 67720.53 00:32:49.631 { 00:32:49.631 "results": [ 00:32:49.631 { 00:32:49.631 "job": "NVMe0n1", 00:32:49.631 "core_mask": "0x1", 00:32:49.631 "workload": "verify", 00:32:49.631 "status": "finished", 00:32:49.631 "verify_range": { 00:32:49.631 "start": 0, 00:32:49.631 "length": 16384 00:32:49.631 }, 00:32:49.631 "queue_depth": 1024, 00:32:49.631 "io_size": 4096, 00:32:49.631 "runtime": 10.055255, 00:32:49.631 "iops": 12441.156390365039, 00:32:49.631 "mibps": 48.59826714986343, 00:32:49.631 "io_failed": 0, 00:32:49.631 "io_timeout": 0, 00:32:49.631 "avg_latency_us": 82067.07038574782, 00:32:49.631 "min_latency_us": 17913.173333333332, 00:32:49.631 "max_latency_us": 67720.53333333334 00:32:49.631 } 
00:32:49.631 ], 00:32:49.631 "core_count": 1 00:32:49.631 } 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1260415 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1260415 ']' 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1260415 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1260415 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1260415' 00:32:49.631 killing process with pid 1260415 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1260415 00:32:49.631 Received shutdown signal, test time was about 10.000000 seconds 00:32:49.631 00:32:49.631 Latency(us) 00:32:49.631 [2024-10-11T10:06:34.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.631 [2024-10-11T10:06:34.263Z] =================================================================================================================== 00:32:49.631 [2024-10-11T10:06:34.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1260415 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:49.631 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.631 rmmod nvme_tcp 00:32:49.631 rmmod nvme_fabrics 00:32:49.891 rmmod nvme_keyring 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
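The summary block above is internally consistent and worth a sanity check: throughput in MiB/s is just IOPS times the 4 KiB IO size, and with the queue pinned at 1024 outstanding IOs, Little's law puts average latency at roughly queue_depth / IOPS. Checking the reported 12441.16 IOPS, 48.60 MiB/s, and 82067 us average:

  echo '12441.16 * 4096 / 1048576' | bc -l      # 48.598..., matches the 48.60 MiB/s column
  echo '1024 / 12441.16 * 1000000' | bc -l      # ~82307 us, close to the 82067 us average

The per-second samples climb from ~9.2k to ~12.4k IOPS over the 10 s run, which is one reason this whole-run estimate lands slightly above the reported mean latency.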
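Teardown, traced here for bdevperf and in nvmftestfini just below for the target, follows a defensive pattern: confirm the pid still names an SPDK reactor before signalling it, then strip only the SPDK-tagged firewall rules. A sketch of that sequence (the namespace removal step is an assumption about what _remove_spdk_ns does, since its output is redirected to /dev/null in the trace):

  name=$(ps --no-headers -o comm= "$pid")      # traced values: reactor_0 / reactor_1
  [ "$name" = sudo ] || kill "$pid"            # never signal the sudo wrapper by mistake
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK_NVMF-tagged rules
  ip -4 addr flush cvl_0_1                     # return the initiator port to a clean state
  ip netns delete cvl_0_0_ns_spdk              # assumed: _remove_spdk_ns deletes the test namespace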
00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1260229 ']' 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1260229 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1260229 ']' 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1260229 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1260229 00:32:49.891 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1260229' 00:32:49.892 killing process with pid 1260229 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1260229 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1260229 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.892 12:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:52.457 00:32:52.457 real 0m21.316s 00:32:52.457 user 0m23.550s 00:32:52.457 sys 0m7.155s 00:32:52.457 12:06:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:52.457 ************************************ 00:32:52.457 END TEST nvmf_queue_depth 00:32:52.457 ************************************ 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:52.457 ************************************ 00:32:52.457 START TEST nvmf_target_multipath 00:32:52.457 ************************************ 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:52.457 * Looking for test storage... 00:32:52.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:52.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.457 --rc genhtml_branch_coverage=1 00:32:52.457 --rc genhtml_function_coverage=1 00:32:52.457 --rc genhtml_legend=1 00:32:52.457 --rc geninfo_all_blocks=1 00:32:52.457 --rc geninfo_unexecuted_blocks=1 00:32:52.457 00:32:52.457 ' 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:52.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.457 --rc genhtml_branch_coverage=1 00:32:52.457 --rc genhtml_function_coverage=1 00:32:52.457 --rc genhtml_legend=1 00:32:52.457 --rc geninfo_all_blocks=1 00:32:52.457 --rc geninfo_unexecuted_blocks=1 00:32:52.457 00:32:52.457 ' 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:52.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.457 --rc genhtml_branch_coverage=1 00:32:52.457 --rc genhtml_function_coverage=1 00:32:52.457 --rc genhtml_legend=1 
00:32:52.457 --rc geninfo_all_blocks=1 00:32:52.457 --rc geninfo_unexecuted_blocks=1 00:32:52.457 00:32:52.457 ' 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:52.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:52.457 --rc genhtml_branch_coverage=1 00:32:52.457 --rc genhtml_function_coverage=1 00:32:52.457 --rc genhtml_legend=1 00:32:52.457 --rc geninfo_all_blocks=1 00:32:52.457 --rc geninfo_unexecuted_blocks=1 00:32:52.457 00:32:52.457 ' 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:52.457 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:52.458 12:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:33:00.684 12:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.684 12:06:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:00.684 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:00.684 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.684 12:06:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:00.684 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:00.684 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.684 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:00.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:00.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:33:00.685 00:33:00.685 --- 10.0.0.2 ping statistics --- 00:33:00.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.685 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:00.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:33:00.685 00:33:00.685 --- 10.0.0.1 ping statistics --- 00:33:00.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.685 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:00.685 only one NIC for nvmf test 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:00.685 rmmod nvme_tcp 00:33:00.685 rmmod nvme_fabrics 00:33:00.685 rmmod nvme_keyring 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:00.685 12:06:44 
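
The block above brings up the two-port loopback topology these TCP tests run on: one E810 port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the other (cvl_0_0) moves into the private namespace cvl_0_0_ns_spdk as the target at 10.0.0.2, and a comment-tagged iptables rule opens the NVMe/TCP port; the two pings then prove reachability in both directions. The multipath test itself bails out right after ("only one NIC for nvmf test") because no second target IP was configured. Condensed from the trace, with the same names and addresses as this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The comment tag lets teardown find and drop exactly this rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
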
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.685 12:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:02.070 12:06:46 
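
Note how nvmftestfini cleans up: the module unloads run under set +e, so a failed rmmod (modules already gone) is harmless, and the firewall is restored by filtering the saved ruleset for the SPDK_NVMF comment tag rather than deleting rules by position. A condensed sketch; the namespace-removal line is an assumed equivalent of what _remove_spdk_ns does, since its internals are not shown in this log:

  # Drop only the rules this test tagged; leave the rest of the ruleset alone.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1          # flush the initiator-side test address
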
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:33:02.070 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.071 00:33:02.071 real 0m9.900s 00:33:02.071 user 0m2.192s 00:33:02.071 sys 0m5.667s 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:02.071 ************************************ 00:33:02.071 END TEST nvmf_target_multipath 00:33:02.071 ************************************ 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:02.071 ************************************ 00:33:02.071 START TEST nvmf_zcopy 00:33:02.071 ************************************ 00:33:02.071 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:02.333 * Looking for test storage... 
00:33:02.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:02.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.333 --rc genhtml_branch_coverage=1 00:33:02.333 --rc genhtml_function_coverage=1 00:33:02.333 --rc genhtml_legend=1 00:33:02.333 --rc geninfo_all_blocks=1 00:33:02.333 --rc geninfo_unexecuted_blocks=1 00:33:02.333 00:33:02.333 ' 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:02.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.333 --rc genhtml_branch_coverage=1 00:33:02.333 --rc genhtml_function_coverage=1 00:33:02.333 --rc genhtml_legend=1 00:33:02.333 --rc geninfo_all_blocks=1 00:33:02.333 --rc geninfo_unexecuted_blocks=1 00:33:02.333 00:33:02.333 ' 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:02.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.333 --rc genhtml_branch_coverage=1 00:33:02.333 --rc genhtml_function_coverage=1 00:33:02.333 --rc genhtml_legend=1 00:33:02.333 --rc geninfo_all_blocks=1 00:33:02.333 --rc geninfo_unexecuted_blocks=1 00:33:02.333 00:33:02.333 ' 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:02.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.333 --rc genhtml_branch_coverage=1 00:33:02.333 --rc genhtml_function_coverage=1 00:33:02.333 --rc genhtml_legend=1 00:33:02.333 --rc geninfo_all_blocks=1 00:33:02.333 --rc geninfo_unexecuted_blocks=1 00:33:02.333 00:33:02.333 ' 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
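
The probe above is scripts/common.sh deciding which coverage flags to emit: it takes the last field of `lcov --version` and runs a component-wise compare (lt 1.15 2), splitting both versions on '.' and '-' and comparing numerically until the first difference; 1 < 2, so the lcov 1.x option set is exported. A standalone sketch of that comparison (simplified; the real cmp_versions also validates each component via decimal()):

  lt() {  # true (0) when version $1 sorts before version $2
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal is not less-than
  }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x flags'
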
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:02.333 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.334 12:06:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:02.334 12:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.476 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:10.476 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:10.476 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:10.476 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:10.476 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:10.476 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:10.476 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:10.476 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:10.477 12:06:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:10.477 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:10.477 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:10.477 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:10.477 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:10.477 12:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:10.477 12:06:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:10.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:10.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:33:10.477 00:33:10.477 --- 10.0.0.2 ping statistics --- 00:33:10.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.477 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:10.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:10.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:33:10.477 00:33:10.477 --- 10.0.0.1 ping statistics --- 00:33:10.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:10.477 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.477 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1270703 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1270703 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1270703 ']' 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:10.478 12:06:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.478 [2024-10-11 12:06:54.367163] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:10.478 [2024-10-11 12:06:54.368256] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:33:10.478 [2024-10-11 12:06:54.368308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.478 [2024-10-11 12:06:54.455881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.478 [2024-10-11 12:06:54.507019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.478 [2024-10-11 12:06:54.507070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.478 [2024-10-11 12:06:54.507078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.478 [2024-10-11 12:06:54.507086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.478 [2024-10-11 12:06:54.507092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.478 [2024-10-11 12:06:54.507855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.478 [2024-10-11 12:06:54.583962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:10.478 [2024-10-11 12:06:54.584261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
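
nvmfappstart above launches the target inside the test namespace, pinned to core 1 by -m 0x2 and with interrupt mode enabled, then waitforlisten blocks until the app's RPC socket answers; the NOTICE lines confirm interrupt mode took effect on the app thread and the poll group before any RPCs are issued. A rough equivalent, with a hypothetical polling loop standing in for waitforlisten (not its actual implementation):

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Poll the RPC socket until the target answers (stand-in for waitforlisten).
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
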
00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:10.739 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.740 [2024-10-11 12:06:55.236719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.740 [2024-10-11 12:06:55.264999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:10.740 12:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.740 malloc0 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:10.740 { 00:33:10.740 "params": { 00:33:10.740 "name": "Nvme$subsystem", 00:33:10.740 "trtype": "$TEST_TRANSPORT", 00:33:10.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.740 "adrfam": "ipv4", 00:33:10.740 "trsvcid": "$NVMF_PORT", 00:33:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.740 "hdgst": ${hdgst:-false}, 00:33:10.740 "ddgst": ${ddgst:-false} 00:33:10.740 }, 00:33:10.740 "method": "bdev_nvme_attach_controller" 00:33:10.740 } 00:33:10.740 EOF 00:33:10.740 )") 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:10.740 12:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:10.740 "params": { 00:33:10.740 "name": "Nvme1", 00:33:10.740 "trtype": "tcp", 00:33:10.740 "traddr": "10.0.0.2", 00:33:10.740 "adrfam": "ipv4", 00:33:10.740 "trsvcid": "4420", 00:33:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:10.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:10.740 "hdgst": false, 00:33:10.740 "ddgst": false 00:33:10.740 }, 00:33:10.740 "method": "bdev_nvme_attach_controller" 00:33:10.740 }' 00:33:10.740 [2024-10-11 12:06:55.366902] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
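
The rpc_cmd calls above assemble the whole zcopy target: a TCP transport with zero-copy on (--zcopy) and in-capsule data off (-c 0), subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev with 4096-byte blocks exported as namespace 1; gen_nvmf_target_json then renders the bdev_nvme_attach_controller config shown, which bdevperf reads over /dev/fd/62 for a 10-second verify run at queue depth 128 and 8 KiB I/O. The same bring-up written as direct rpc.py calls (a sketch; rpc_cmd in the harness adds socket and retry handling):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
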
00:33:10.740 [2024-10-11 12:06:55.366978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270870 ] 00:33:11.001 [2024-10-11 12:06:55.450713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.001 [2024-10-11 12:06:55.503808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.262 Running I/O for 10 seconds... 00:33:13.590 6335.00 IOPS, 49.49 MiB/s [2024-10-11T10:06:59.165Z] 6385.50 IOPS, 49.89 MiB/s [2024-10-11T10:07:00.108Z] 6406.67 IOPS, 50.05 MiB/s [2024-10-11T10:07:01.049Z] 6417.25 IOPS, 50.13 MiB/s [2024-10-11T10:07:01.992Z] 6592.40 IOPS, 51.50 MiB/s [2024-10-11T10:07:02.931Z] 7083.67 IOPS, 55.34 MiB/s [2024-10-11T10:07:03.874Z] 7442.14 IOPS, 58.14 MiB/s [2024-10-11T10:07:05.258Z] 7709.12 IOPS, 60.23 MiB/s [2024-10-11T10:07:06.199Z] 7913.89 IOPS, 61.83 MiB/s [2024-10-11T10:07:06.199Z] 8079.50 IOPS, 63.12 MiB/s 00:33:21.567 Latency(us) 00:33:21.567 [2024-10-11T10:07:06.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.567 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:21.567 Verification LBA range: start 0x0 length 0x1000 00:33:21.567 Nvme1n1 : 10.01 8083.76 63.15 0.00 0.00 15788.72 2280.11 29054.29 00:33:21.568 [2024-10-11T10:07:06.200Z] =================================================================================================================== 00:33:21.568 [2024-10-11T10:07:06.200Z] Total : 8083.76 63.15 0.00 0.00 15788.72 2280.11 29054.29 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1272869 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:21.568 { 00:33:21.568 "params": { 00:33:21.568 "name": "Nvme$subsystem", 00:33:21.568 "trtype": "$TEST_TRANSPORT", 00:33:21.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:21.568 "adrfam": "ipv4", 00:33:21.568 "trsvcid": "$NVMF_PORT", 00:33:21.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:21.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:21.568 "hdgst": ${hdgst:-false}, 00:33:21.568 "ddgst": ${ddgst:-false} 00:33:21.568 }, 00:33:21.568 "method": "bdev_nvme_attach_controller" 00:33:21.568 } 00:33:21.568 EOF 00:33:21.568 )") 00:33:21.568 [2024-10-11 12:07:05.972263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:33:21.568 [2024-10-11 12:07:05.972293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:21.568 12:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:21.568 "params": { 00:33:21.568 "name": "Nvme1", 00:33:21.568 "trtype": "tcp", 00:33:21.568 "traddr": "10.0.0.2", 00:33:21.568 "adrfam": "ipv4", 00:33:21.568 "trsvcid": "4420", 00:33:21.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:21.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:21.568 "hdgst": false, 00:33:21.568 "ddgst": false 00:33:21.568 }, 00:33:21.568 "method": "bdev_nvme_attach_controller" 00:33:21.568 }' 00:33:21.568 [2024-10-11 12:07:05.984231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:05.984242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:05.996229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:05.996237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.008228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.008235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.020229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.020236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.027992] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:33:21.568 [2024-10-11 12:07:06.028039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272869 ] 00:33:21.568 [2024-10-11 12:07:06.032229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.032236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.044229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.044235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.056229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.056236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.068229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.068236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.080228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.080236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.092229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.092239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.103475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.568 [2024-10-11 12:07:06.104228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.104235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.116229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.116240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.128229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.128238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.132632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.568 [2024-10-11 12:07:06.140228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.140235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.152236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.152247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.164231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.164243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.176232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:33:21.568 [2024-10-11 12:07:06.176242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.568 [2024-10-11 12:07:06.188229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.568 [2024-10-11 12:07:06.188237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.200239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.200255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.212230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.212240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.224232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.224243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.236240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.236250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.248230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.248239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.260235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.260249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 Running I/O for 5 seconds... 
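The long run of paired errors that follows is the RPC-churn half of the test: while the second bdevperf job drives 5 seconds of random I/O, the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1, and every attempt fails in nvmf_rpc_ns_paused because malloc0 is still attached under that NSID — apparently exercising the subsystem pause/resume path under live zcopy traffic rather than expecting the adds to succeed. Reproduced in isolation (a sketch with the same RPCs as above; the recovery variants are illustrative, not part of this test):

  # fails with "Requested NSID 1 already in use" while the namespace is attached
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # to actually re-add it, detach the existing namespace first ...
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # ... or omit -n, which should let the target assign the next free NSID
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0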
00:33:21.828 [2024-10-11 12:07:06.275652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.275675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.288493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.288508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.303719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.303735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.316907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.316922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.331683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.331698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.344847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.344865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.828 [2024-10-11 12:07:06.359092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.828 [2024-10-11 12:07:06.359107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.829 [2024-10-11 12:07:06.371992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.829 [2024-10-11 12:07:06.372007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.829 [2024-10-11 12:07:06.384208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.829 [2024-10-11 12:07:06.384223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.829 [2024-10-11 12:07:06.396300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.829 [2024-10-11 12:07:06.396315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.829 [2024-10-11 12:07:06.408946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.829 [2024-10-11 12:07:06.408960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.829 [2024-10-11 12:07:06.423057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.829 [2024-10-11 12:07:06.423072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.829 [2024-10-11 12:07:06.436031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.829 [2024-10-11 12:07:06.436045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:21.829 [2024-10-11 12:07:06.448803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:21.829 [2024-10-11 12:07:06.448818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.464081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 
[2024-10-11 12:07:06.464096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.476637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.476652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.491760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.491776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.504588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.504604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.519836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.519851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.532928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.532943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.547343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.547359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.560517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.560531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.575311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.575327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.588430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.588445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.600897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.600915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.615205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.615221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.628403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.628417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.644013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.644028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.656932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.656947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.671130] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.671145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.684253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.684268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.696722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.696737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.090 [2024-10-11 12:07:06.711565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.090 [2024-10-11 12:07:06.711580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.724477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.724492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.736955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.736970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.751434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.751449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.764841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.764856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.779713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.779728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.792719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.792734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.807633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.807649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.820366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.820380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.835147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.835161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.848104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.848118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.860859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.860878] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.875493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.875508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.888103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.888118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.900581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.900595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.915468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.915483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.928350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.928365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.940806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.940820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.955388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.955403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.967946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.967961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.351 [2024-10-11 12:07:06.980517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.351 [2024-10-11 12:07:06.980531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:06.995112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:06.995128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.007854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.007868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.020358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.020373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.032884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.032898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.047540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.047555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.060884] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.060899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.075385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.075399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.088311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.088326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.100623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.100638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.115635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.115657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.128252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.128268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.140459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.140474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.155125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.155140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.168072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.168087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.180675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.180690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.195613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.195628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.208448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.208462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.224021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.224036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.613 [2024-10-11 12:07:07.236742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.613 [2024-10-11 12:07:07.236756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.873 [2024-10-11 12:07:07.251225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.873 [2024-10-11 12:07:07.251240] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.873 [2024-10-11 12:07:07.264076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.264091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 18890.00 IOPS, 147.58 MiB/s [2024-10-11T10:07:07.506Z] [2024-10-11 12:07:07.275836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.275851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.288926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.288940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.303664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.303683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.316255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.316270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.328617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.328631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.342771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.342785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.356648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.356662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.371559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.371574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.384445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.384458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.399330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.399344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.411893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.411907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.424252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.424267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.436976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.436990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 
12:07:07.451798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.451812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.464933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.464948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.478917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.478931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.491509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.491524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:22.874 [2024-10-11 12:07:07.504436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:22.874 [2024-10-11 12:07:07.504450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.516803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.516817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.531385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.531400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.544094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.544109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.556613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.556627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.570776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.570791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.583939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.583954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.596494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.596508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.611313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.611329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.624223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.624238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.636442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.636456] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.651656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.651674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.664567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.664581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.679531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.679546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.693250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.693264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.707618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.707632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.720229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.720243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.732094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.732108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.744976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.744990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.135 [2024-10-11 12:07:07.759505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.135 [2024-10-11 12:07:07.759520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.772418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.772433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.784882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.784897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.799630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.799645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.812343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.812357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.825334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.825349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.840246] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.840261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.852839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.852853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.867413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.867428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.880200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.880215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.892880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.892894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.907904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.907918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.920549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.920563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.935292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.935306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.948435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.948450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.960012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.960027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.973304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.973319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:07.987109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:07.987124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:08.000128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:08.000143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:08.012397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:08.012411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.397 [2024-10-11 12:07:08.027146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.397 [2024-10-11 12:07:08.027161] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.040494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.040509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.055693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.055707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.068772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.068786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.083866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.083880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.096661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.096680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.111025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.111039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.124423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.124441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.137284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.137298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.151682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.151697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.164511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.164525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.179346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.179361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.192108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.192123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.204314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.204329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.216843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.216857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.231890] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.231905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.244639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.244653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.259543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.259558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 [2024-10-11 12:07:08.272238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.272254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.658 18914.50 IOPS, 147.77 MiB/s [2024-10-11T10:07:08.290Z] [2024-10-11 12:07:08.284524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.658 [2024-10-11 12:07:08.284538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.919 [2024-10-11 12:07:08.299647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.299663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.312603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.312617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.326635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.326650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.339364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.339378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.352291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.352305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.364126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.364141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.376962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.376980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.391486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.391501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.404703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:23.920 [2024-10-11 12:07:08.404717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.920 [2024-10-11 12:07:08.419480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:23.920 [2024-10-11 12:07:08.419496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, then nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats every ~12-15 ms from 12:07:08.432 through 12:07:11.388; only the timestamps change. The bdevperf progress samples and the end-of-run summary below were interleaved with these error pairs in the original output ...]
00:33:24.704 18907.00 IOPS, 147.71 MiB/s [2024-10-11T10:07:09.336Z]
00:33:25.751 18926.75 IOPS, 147.87 MiB/s [2024-10-11T10:07:10.383Z]
00:33:26.797 18924.20 IOPS, 147.85 MiB/s [2024-10-11T10:07:11.429Z]
00:33:26.797
00:33:26.797 Latency(us)
00:33:26.797 [2024-10-11T10:07:11.429Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:26.797 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:26.797 Nvme1n1 :       5.01   18925.10     147.85       0.00     0.00    6757.35    2389.33   12615.68
00:33:26.797 [2024-10-11T10:07:11.429Z] ===================================================================================================================
00:33:26.797 [2024-10-11T10:07:11.429Z] Total   :              18925.10     147.85       0.00     0.00    6757.35    2389.33   12615.68
00:33:26.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1272869) - No such process
00:33:26.797 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1272869
00:33:26.797 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:26.797 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:26.797 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:26.797 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:26.797 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:26.798 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:26.798 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:26.798 delay0
00:33:26.798 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:26.798 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:33:26.798 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:26.798 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
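Note: the error flood above is the expected behaviour of this test, which keeps issuing nvmf_subsystem_add_ns for an NSID that is already attached while I/O runs. A minimal sketch of the kind of RPC loop that produces the same two errors against a running SPDK target follows; the rpc.py path, the iteration count, and running from the SPDK checkout root are assumptions for illustration (this is not the autotest's exact code path), while the subsystem NQN and bdev name are taken from the log above:

    #!/usr/bin/env bash
    # Sketch: reproduce "Requested NSID 1 already in use" followed by
    # "Unable to add namespace" (assumed paths; not the autotest loop itself).
    NQN=nqn.2016-06.io.spdk:cnode1
    RPC=./scripts/rpc.py   # assumed location of the SPDK RPC client

    # The first attach succeeds and claims NSID 1 for malloc0.
    $RPC nvmf_subsystem_add_ns -n 1 "$NQN" malloc0

    # Every further attempt to claim the same NSID fails with the pair
    # of errors seen above; '|| true' keeps the loop going regardless.
    for _ in $(seq 1 100); do
        $RPC nvmf_subsystem_add_ns -n 1 "$NQN" malloc0 || true
    done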
00:33:27.058 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:27.058 12:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:33:27.058 [2024-10-11 12:07:11.587839] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:33:33.643 Initializing NVMe Controllers
00:33:33.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:33.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:33.643 Initialization complete. Launching workers.
00:33:33.643 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4442
00:33:33.643 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4718, failed to submit 44
00:33:33.643 success 4520, unsuccessful 198, failed 0
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:33.643 rmmod nvme_tcp
00:33:33.643 rmmod nvme_fabrics
00:33:33.643 rmmod nvme_keyring
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1270703 ']'
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1270703
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1270703 ']'
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1270703
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1270703
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1270703'
00:33:33.643 killing process with pid 1270703
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1270703
00:33:33.643 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1270703
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:33.904 12:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:35.817 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:35.817
00:33:35.817 real    0m33.785s
00:33:35.817 user    0m42.646s
00:33:35.817 sys     0m12.578s
00:33:35.817 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:35.817 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:35.817 ************************************
00:33:35.817 END TEST nvmf_zcopy
00:33:35.817 ************************************
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:36.079 ************************************
00:33:36.079 START TEST nvmf_nmic
00:33:36.079 ************************************
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
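Note: the END TEST / START TEST banners and the real/user/sys timing above are printed by the run_test helper in common/autotest_common.sh, which frames and times every test script in this log. A simplified sketch of a run_test-style wrapper follows; this is an illustrative approximation, not SPDK's actual helper, which also does xtrace and result bookkeeping for the final report:

    # Sketch of a run_test-style wrapper (approximation, assumed behaviour).
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"       # the command's real/user/sys times land in the log
        local rc=$?     # exit status of the timed command
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    # Usage, as traced above:
    # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode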
00:33:36.079 * Looking for test storage...
00:33:36.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:33:36.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:36.079 --rc genhtml_branch_coverage=1
00:33:36.079 --rc genhtml_function_coverage=1
00:33:36.079 --rc genhtml_legend=1
00:33:36.079 --rc geninfo_all_blocks=1
00:33:36.079 --rc geninfo_unexecuted_blocks=1
00:33:36.079
00:33:36.079 '
[... the same coverage-flag block is echoed three more times, for the LCOV_OPTS=' ... ' assignment, for export 'LCOV=lcov ... ', and for the LCOV='lcov ... ' assignment ...]
00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 --
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.079 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.341 12:07:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:36.341 12:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:44.485 12:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:44.485 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:44.486 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.486 12:07:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:44.486 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:44.486 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.486 
12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:44.486 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:44.486 12:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
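The nvmf_tcp_init sequence traced above reduces to a handful of iproute2 and iptables commands: one ice port is moved into a private network namespace to act as the target, the other stays on the host as the initiator, a firewall exception is opened for the NVMe/TCP port, and reachability is checked both ways with ping. A minimal manual equivalent, assuming the two ports already carry the cvl_0_0/cvl_0_1 names used in this log — every command below is taken from the trace, only the grouping and comments are added:

    # target side gets its own namespace; move one port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps 10.0.0.1 on the host; target answers on 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring both ends up, plus loopback inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the default NVMe/TCP port; the comment tag lets the teardown grep find the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # sanity check in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1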
00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:44.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:44.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:33:44.486 00:33:44.486 --- 10.0.0.2 ping statistics --- 00:33:44.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:44.486 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:44.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:44.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:33:44.486 00:33:44.486 --- 10.0.0.1 ping statistics --- 00:33:44.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:44.486 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:44.486 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1279223 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 1279223 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1279223 ']' 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:44.487 12:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.487 [2024-10-11 12:07:28.271295] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:44.487 [2024-10-11 12:07:28.272427] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:33:44.487 [2024-10-11 12:07:28.272479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.487 [2024-10-11 12:07:28.363105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:44.487 [2024-10-11 12:07:28.419438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:44.487 [2024-10-11 12:07:28.419492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:44.487 [2024-10-11 12:07:28.419501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:44.487 [2024-10-11 12:07:28.419509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:44.487 [2024-10-11 12:07:28.419515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:44.487 [2024-10-11 12:07:28.421596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.487 [2024-10-11 12:07:28.421743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:44.487 [2024-10-11 12:07:28.421822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.487 [2024-10-11 12:07:28.421822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:44.487 [2024-10-11 12:07:28.499545] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:44.487 [2024-10-11 12:07:28.500847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:44.487 [2024-10-11 12:07:28.500948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:44.487 [2024-10-11 12:07:28.501239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:44.487 [2024-10-11 12:07:28.501297] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:44.487 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:44.487 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:33:44.487 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:44.487 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:44.487 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.748 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:44.748 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:44.748 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 [2024-10-11 12:07:29.135335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 Malloc0 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
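Condensed, the nmic.sh setup traced above — together with the duplicate-namespace check that follows below — is a short JSON-RPC sequence; each rpc_cmd in the log wraps SPDK's scripts/rpc.py against the target running inside the namespace. A sketch of the same flow driven by hand; the commands and arguments are copied from the trace, while the rpc.py spelling and the comments are assumptions:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # test case1: the same malloc bdev may not join a second subsystem; as the
    # trace below shows, this add_ns is rejected ("bdev Malloc0 already claimed")
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'

    # test case2: a second listener on port 4421 gives the host two paths to cnode1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421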
00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 [2024-10-11 12:07:29.223635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:44.749 test case1: single bdev can't be used in multiple subsystems 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 [2024-10-11 12:07:29.258948] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:44.749 [2024-10-11 12:07:29.258978] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:44.749 [2024-10-11 12:07:29.258987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:44.749 request: 00:33:44.749 { 00:33:44.749 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:44.749 "namespace": { 00:33:44.749 "bdev_name": "Malloc0", 00:33:44.749 "no_auto_visible": false 00:33:44.749 }, 00:33:44.749 "method": "nvmf_subsystem_add_ns", 00:33:44.749 "req_id": 1 00:33:44.749 } 00:33:44.749 Got JSON-RPC error response 00:33:44.749 response: 00:33:44.749 { 00:33:44.749 "code": -32602, 00:33:44.749 "message": "Invalid parameters" 00:33:44.749 } 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:44.749 12:07:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:44.749 Adding namespace failed - expected result. 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:44.749 test case2: host connect to nvmf target in multiple paths 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:44.749 [2024-10-11 12:07:29.271107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.749 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:45.322 12:07:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:45.583 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:45.583 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:33:45.583 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:45.583 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:45.583 12:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:33:47.500 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:47.500 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:47.500 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:47.500 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:47.500 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:47.500 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:33:47.500 12:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:47.500 [global] 00:33:47.500 thread=1 00:33:47.500 invalidate=1 
00:33:47.500 rw=write 00:33:47.500 time_based=1 00:33:47.500 runtime=1 00:33:47.500 ioengine=libaio 00:33:47.500 direct=1 00:33:47.500 bs=4096 00:33:47.500 iodepth=1 00:33:47.500 norandommap=0 00:33:47.500 numjobs=1 00:33:47.500 00:33:47.500 verify_dump=1 00:33:47.500 verify_backlog=512 00:33:47.500 verify_state_save=0 00:33:47.500 do_verify=1 00:33:47.500 verify=crc32c-intel 00:33:47.500 [job0] 00:33:47.500 filename=/dev/nvme0n1 00:33:47.786 Could not set queue depth (nvme0n1) 00:33:48.049 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.049 fio-3.35 00:33:48.049 Starting 1 thread 00:33:49.433 00:33:49.433 job0: (groupid=0, jobs=1): err= 0: pid=1280327: Fri Oct 11 12:07:33 2024 00:33:49.433 read: IOPS=19, BW=77.8KiB/s (79.7kB/s)(80.0KiB/1028msec) 00:33:49.433 slat (nsec): min=9998, max=32921, avg=26951.10, stdev=4167.71 00:33:49.433 clat (usec): min=790, max=42080, avg=39736.73, stdev=9174.34 00:33:49.433 lat (usec): min=800, max=42113, avg=39763.68, stdev=9178.33 00:33:49.433 clat percentiles (usec): 00:33:49.433 | 1.00th=[ 791], 5.00th=[ 791], 10.00th=[41157], 20.00th=[41157], 00:33:49.433 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:49.433 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:49.433 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:49.433 | 99.99th=[42206] 00:33:49.433 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:33:49.433 slat (usec): min=9, max=28902, avg=86.20, stdev=1276.06 00:33:49.433 clat (usec): min=143, max=1140, avg=355.70, stdev=100.11 00:33:49.433 lat (usec): min=153, max=29331, avg=441.90, stdev=1283.52 00:33:49.433 clat percentiles (usec): 00:33:49.433 | 1.00th=[ 194], 5.00th=[ 212], 10.00th=[ 235], 20.00th=[ 281], 00:33:49.433 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 330], 60.00th=[ 388], 00:33:49.433 | 70.00th=[ 404], 80.00th=[ 433], 90.00th=[ 494], 95.00th=[ 515], 00:33:49.433 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 1139], 99.95th=[ 1139], 00:33:49.433 | 99.99th=[ 1139] 00:33:49.433 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:49.433 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:49.433 lat (usec) : 250=11.47%, 500=76.32%, 750=8.27%, 1000=0.19% 00:33:49.433 lat (msec) : 2=0.19%, 50=3.57% 00:33:49.433 cpu : usr=0.58%, sys=2.24%, ctx=536, majf=0, minf=1 00:33:49.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.433 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:49.433 00:33:49.433 Run status group 0 (all jobs): 00:33:49.433 READ: bw=77.8KiB/s (79.7kB/s), 77.8KiB/s-77.8KiB/s (79.7kB/s-79.7kB/s), io=80.0KiB (81.9kB), run=1028-1028msec 00:33:49.433 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:33:49.433 00:33:49.433 Disk stats (read/write): 00:33:49.433 nvme0n1: ios=41/512, merge=0/0, ticks=1601/138, in_queue=1739, util=98.70% 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:49.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
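The fio-wrapper invocation above expands to an ordinary fio job file; the [global]/[job0] dump in the trace is that file echoed back. Written out by hand it would look like the sketch below, where every option is copied from the dump and only the job-file name is an arbitrary choice:

    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1

Saved as, say, nmic-verify.fio, it runs with: fio nmic-verify.fio — a 1-second write-and-verify pass against the connected namespace, matching the job0 results reported above.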
00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.433 rmmod nvme_tcp 00:33:49.433 rmmod nvme_fabrics 00:33:49.433 rmmod nvme_keyring 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1279223 ']' 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1279223 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1279223 ']' 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1279223 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1279223 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1279223' 00:33:49.433 killing process with pid 1279223 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1279223 00:33:49.433 12:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1279223 00:33:49.694 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:49.694 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:49.694 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:49.694 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:49.694 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:33:49.694 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:49.694 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:33:49.694 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.695 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.695 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.695 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.695 12:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.606 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:51.606 00:33:51.606 real 0m15.686s 00:33:51.606 user 0m36.695s 00:33:51.606 sys 0m7.293s 00:33:51.606 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:51.606 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:51.606 ************************************ 00:33:51.606 END TEST nvmf_nmic 00:33:51.606 ************************************ 00:33:51.606 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:51.606 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:51.606 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:51.606 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:51.867 ************************************ 00:33:51.867 START TEST nvmf_fio_target 00:33:51.867 ************************************ 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:51.867 * Looking for test storage... 
00:33:51.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.867 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:51.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.868 --rc genhtml_branch_coverage=1 00:33:51.868 --rc genhtml_function_coverage=1 00:33:51.868 --rc genhtml_legend=1 00:33:51.868 --rc geninfo_all_blocks=1 00:33:51.868 --rc geninfo_unexecuted_blocks=1 00:33:51.868 00:33:51.868 ' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:51.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.868 --rc genhtml_branch_coverage=1 00:33:51.868 --rc genhtml_function_coverage=1 00:33:51.868 --rc genhtml_legend=1 00:33:51.868 --rc geninfo_all_blocks=1 00:33:51.868 --rc geninfo_unexecuted_blocks=1 00:33:51.868 00:33:51.868 ' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:51.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.868 --rc genhtml_branch_coverage=1 00:33:51.868 --rc genhtml_function_coverage=1 00:33:51.868 --rc genhtml_legend=1 00:33:51.868 --rc geninfo_all_blocks=1 00:33:51.868 --rc geninfo_unexecuted_blocks=1 00:33:51.868 00:33:51.868 ' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:51.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.868 --rc genhtml_branch_coverage=1 00:33:51.868 --rc genhtml_function_coverage=1 00:33:51.868 --rc genhtml_legend=1 00:33:51.868 --rc geninfo_all_blocks=1 00:33:51.868 --rc geninfo_unexecuted_blocks=1 00:33:51.868 
00:33:51.868 ' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.868 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:52.129 12:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:00.267 12:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.267 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:00.268 12:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:00.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:00.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:00.268 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:00.268 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:00.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:34:00.268 00:34:00.268 --- 10.0.0.2 ping statistics --- 00:34:00.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.268 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:00.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:34:00.268 00:34:00.268 --- 10.0.0.1 ping statistics --- 00:34:00.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.268 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1284745 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1284745 00:34:00.268 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:00.269 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1284745 ']' 00:34:00.269 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.269 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:00.269 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
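The trace above assembles the loopback NVMe/TCP test topology: one port of the e810 pair is moved into a private network namespace to act as the target, while its sibling stays in the default namespace as the initiator. A minimal sketch of the equivalent setup, using the interface names, addresses, and port exactly as they appear in this run (root privileges assumed):

  # Target port goes into its own namespace; initiator port stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listening port toward the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings completing with 0% packet loss, as in the statistics above, is what lets nvmftestinit return 0 and hand control back to fio.sh.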
00:34:00.269 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:00.269 12:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.269 [2024-10-11 12:07:44.019259] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:00.269 [2024-10-11 12:07:44.020396] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:34:00.269 [2024-10-11 12:07:44.020444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.269 [2024-10-11 12:07:44.108888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:00.269 [2024-10-11 12:07:44.162327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:00.269 [2024-10-11 12:07:44.162383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:00.269 [2024-10-11 12:07:44.162391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:00.269 [2024-10-11 12:07:44.162398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:00.269 [2024-10-11 12:07:44.162404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:00.269 [2024-10-11 12:07:44.164751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.269 [2024-10-11 12:07:44.164918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:00.269 [2024-10-11 12:07:44.165080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.269 [2024-10-11 12:07:44.165080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:00.269 [2024-10-11 12:07:44.241849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:00.269 [2024-10-11 12:07:44.242211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:00.269 [2024-10-11 12:07:44.242848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:00.269 [2024-10-11 12:07:44.243394] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:00.269 [2024-10-11 12:07:44.243439] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
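With the reactors started and the app thread in interrupt mode, fio.sh provisions the target over /var/tmp/spdk.sock before the initiator connects. The trace that follows drives this through rpc.py one call at a time; a condensed sketch of the same sequence, with the sizes, NQNs, and addresses taken from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512                                # Malloc0..Malloc6: 64 MB, 512 B blocks
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
               --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces then surface on the initiator as /dev/nvme0n1../dev/nvme0n4 (waitforserial matches them by the SPDKISFASTANDAWESOME serial via lsblk), and these device nodes become the fio job filenames in the runs below.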
00:34:00.269 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:00.269 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:34:00.269 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:00.269 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:00.269 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.269 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.269 12:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:00.530 [2024-10-11 12:07:45.058029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.530 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:00.791 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:00.791 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:01.051 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:01.051 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:01.312 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:01.312 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:01.572 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:01.572 12:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:01.572 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:01.833 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:01.833 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:02.094 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:02.094 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:02.355 12:07:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:02.355 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:02.355 12:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:02.660 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:02.660 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:02.660 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:02.660 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:02.953 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.243 [2024-10-11 12:07:47.637945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.243 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:03.538 12:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:03.538 12:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:04.110 12:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:04.110 12:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:34:04.110 12:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:04.110 12:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:34:04.110 12:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:34:04.110 12:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:34:06.023 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:06.023 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:34:06.023 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:06.023 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:34:06.023 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:06.023 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:34:06.023 12:07:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:06.023 [global] 00:34:06.023 thread=1 00:34:06.023 invalidate=1 00:34:06.023 rw=write 00:34:06.023 time_based=1 00:34:06.023 runtime=1 00:34:06.023 ioengine=libaio 00:34:06.023 direct=1 00:34:06.023 bs=4096 00:34:06.023 iodepth=1 00:34:06.023 norandommap=0 00:34:06.023 numjobs=1 00:34:06.023 00:34:06.023 verify_dump=1 00:34:06.023 verify_backlog=512 00:34:06.023 verify_state_save=0 00:34:06.023 do_verify=1 00:34:06.023 verify=crc32c-intel 00:34:06.023 [job0] 00:34:06.023 filename=/dev/nvme0n1 00:34:06.023 [job1] 00:34:06.023 filename=/dev/nvme0n2 00:34:06.023 [job2] 00:34:06.023 filename=/dev/nvme0n3 00:34:06.023 [job3] 00:34:06.023 filename=/dev/nvme0n4 00:34:06.023 Could not set queue depth (nvme0n1) 00:34:06.023 Could not set queue depth (nvme0n2) 00:34:06.023 Could not set queue depth (nvme0n3) 00:34:06.023 Could not set queue depth (nvme0n4) 00:34:06.603 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:06.603 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:06.603 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:06.603 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:06.603 fio-3.35 00:34:06.603 Starting 4 threads 00:34:07.546 00:34:07.546 job0: (groupid=0, jobs=1): err= 0: pid=1286315: Fri Oct 11 12:07:52 2024 00:34:07.546 read: IOPS=462, BW=1850KiB/s (1895kB/s)(1852KiB/1001msec) 00:34:07.546 slat (nsec): min=9702, max=50545, avg=26159.84, stdev=5324.75 00:34:07.546 clat (usec): min=855, max=1537, avg=1231.29, stdev=112.47 00:34:07.546 lat (usec): min=874, max=1563, avg=1257.45, stdev=112.20 00:34:07.546 clat percentiles (usec): 00:34:07.546 | 1.00th=[ 906], 5.00th=[ 1037], 10.00th=[ 1090], 20.00th=[ 1139], 00:34:07.546 | 30.00th=[ 1188], 40.00th=[ 1205], 50.00th=[ 1237], 60.00th=[ 1270], 00:34:07.546 | 70.00th=[ 1303], 80.00th=[ 1319], 90.00th=[ 1369], 95.00th=[ 1385], 00:34:07.546 | 99.00th=[ 1467], 99.50th=[ 1500], 99.90th=[ 1532], 99.95th=[ 1532], 00:34:07.546 | 99.99th=[ 1532] 00:34:07.546 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:07.546 slat (usec): min=10, max=41413, avg=182.83, stdev=2396.74 00:34:07.546 clat (usec): min=168, max=1074, avg=618.55, stdev=138.47 00:34:07.546 lat (usec): min=203, max=41988, avg=801.38, stdev=2399.39 00:34:07.546 clat percentiles (usec): 00:34:07.546 | 1.00th=[ 310], 5.00th=[ 396], 10.00th=[ 441], 20.00th=[ 498], 00:34:07.546 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 660], 00:34:07.546 | 70.00th=[ 693], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 840], 00:34:07.546 | 
99.00th=[ 922], 99.50th=[ 979], 99.90th=[ 1074], 99.95th=[ 1074], 00:34:07.546 | 99.99th=[ 1074] 00:34:07.546 bw ( KiB/s): min= 4096, max= 4096, per=43.20%, avg=4096.00, stdev= 0.00, samples=1 00:34:07.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:07.546 lat (usec) : 250=0.10%, 500=10.87%, 750=32.10%, 1000=11.08% 00:34:07.546 lat (msec) : 2=45.85% 00:34:07.546 cpu : usr=1.80%, sys=2.50%, ctx=978, majf=0, minf=1 00:34:07.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.546 issued rwts: total=463,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:07.546 job1: (groupid=0, jobs=1): err= 0: pid=1286330: Fri Oct 11 12:07:52 2024 00:34:07.546 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:07.546 slat (nsec): min=8758, max=59357, avg=26068.49, stdev=4652.25 00:34:07.546 clat (usec): min=499, max=41355, avg=1152.05, stdev=1784.51 00:34:07.546 lat (usec): min=526, max=41364, avg=1178.12, stdev=1783.71 00:34:07.546 clat percentiles (usec): 00:34:07.546 | 1.00th=[ 783], 5.00th=[ 840], 10.00th=[ 898], 20.00th=[ 979], 00:34:07.546 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:34:07.546 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:34:07.546 | 99.00th=[ 1352], 99.50th=[ 1418], 99.90th=[41157], 99.95th=[41157], 00:34:07.546 | 99.99th=[41157] 00:34:07.546 write: IOPS=584, BW=2338KiB/s (2394kB/s)(2340KiB/1001msec); 0 zone resets 00:34:07.546 slat (usec): min=6, max=1514, avg=29.30, stdev=73.64 00:34:07.546 clat (usec): min=274, max=1012, avg=634.94, stdev=125.26 00:34:07.546 lat (usec): min=284, max=2045, avg=664.24, stdev=149.80 00:34:07.546 clat percentiles (usec): 00:34:07.546 | 1.00th=[ 326], 5.00th=[ 404], 10.00th=[ 478], 20.00th=[ 537], 00:34:07.546 | 30.00th=[ 586], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:34:07.546 | 70.00th=[ 701], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 832], 00:34:07.546 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1012], 99.95th=[ 1012], 00:34:07.546 | 99.99th=[ 1012] 00:34:07.546 bw ( KiB/s): min= 4096, max= 4096, per=43.20%, avg=4096.00, stdev= 0.00, samples=1 00:34:07.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:07.546 lat (usec) : 500=7.20%, 750=36.28%, 1000=20.51% 00:34:07.546 lat (msec) : 2=35.92%, 50=0.09% 00:34:07.546 cpu : usr=1.30%, sys=3.00%, ctx=1101, majf=0, minf=2 00:34:07.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.546 issued rwts: total=512,585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:07.546 job2: (groupid=0, jobs=1): err= 0: pid=1286334: Fri Oct 11 12:07:52 2024 00:34:07.546 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:07.546 slat (nsec): min=8349, max=44249, avg=26559.92, stdev=2518.70 00:34:07.546 clat (usec): min=641, max=41426, avg=1167.23, stdev=1785.80 00:34:07.546 lat (usec): min=649, max=41454, avg=1193.79, stdev=1785.93 00:34:07.546 clat percentiles (usec): 00:34:07.546 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 955], 20.00th=[ 1020], 00:34:07.546 | 
30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:34:07.546 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:34:07.546 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[41681], 99.95th=[41681], 00:34:07.546 | 99.99th=[41681] 00:34:07.546 write: IOPS=566, BW=2266KiB/s (2320kB/s)(2268KiB/1001msec); 0 zone resets 00:34:07.546 slat (nsec): min=10050, max=58158, avg=30747.98, stdev=10248.62 00:34:07.546 clat (usec): min=246, max=981, avg=639.47, stdev=114.22 00:34:07.546 lat (usec): min=276, max=1016, avg=670.22, stdev=118.09 00:34:07.546 clat percentiles (usec): 00:34:07.546 | 1.00th=[ 363], 5.00th=[ 420], 10.00th=[ 478], 20.00th=[ 545], 00:34:07.546 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 693], 00:34:07.546 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 791], 00:34:07.546 | 99.00th=[ 848], 99.50th=[ 906], 99.90th=[ 979], 99.95th=[ 979], 00:34:07.546 | 99.99th=[ 979] 00:34:07.546 bw ( KiB/s): min= 4096, max= 4096, per=43.20%, avg=4096.00, stdev= 0.00, samples=1 00:34:07.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:07.546 lat (usec) : 250=0.09%, 500=6.58%, 750=38.00%, 1000=15.94% 00:34:07.546 lat (msec) : 2=39.30%, 50=0.09% 00:34:07.546 cpu : usr=1.90%, sys=3.00%, ctx=1082, majf=0, minf=1 00:34:07.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.546 issued rwts: total=512,567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:07.546 job3: (groupid=0, jobs=1): err= 0: pid=1286335: Fri Oct 11 12:07:52 2024 00:34:07.546 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:07.546 slat (nsec): min=10014, max=52006, avg=27773.35, stdev=4289.83 00:34:07.546 clat (usec): min=686, max=1452, avg=1012.67, stdev=100.61 00:34:07.546 lat (usec): min=697, max=1484, avg=1040.44, stdev=101.17 00:34:07.546 clat percentiles (usec): 00:34:07.546 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 930], 00:34:07.546 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:34:07.546 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:34:07.546 | 99.00th=[ 1237], 99.50th=[ 1336], 99.90th=[ 1450], 99.95th=[ 1450], 00:34:07.546 | 99.99th=[ 1450] 00:34:07.546 write: IOPS=708, BW=2833KiB/s (2901kB/s)(2836KiB/1001msec); 0 zone resets 00:34:07.546 slat (nsec): min=9850, max=71897, avg=32903.76, stdev=10157.01 00:34:07.546 clat (usec): min=266, max=1315, avg=612.34, stdev=121.02 00:34:07.546 lat (usec): min=290, max=1355, avg=645.24, stdev=124.12 00:34:07.546 clat percentiles (usec): 00:34:07.546 | 1.00th=[ 338], 5.00th=[ 404], 10.00th=[ 474], 20.00th=[ 519], 00:34:07.547 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:34:07.547 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 799], 00:34:07.547 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1319], 99.95th=[ 1319], 00:34:07.547 | 99.99th=[ 1319] 00:34:07.547 bw ( KiB/s): min= 4096, max= 4096, per=43.20%, avg=4096.00, stdev= 0.00, samples=1 00:34:07.547 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:07.547 lat (usec) : 500=8.68%, 750=43.98%, 1000=24.16% 00:34:07.547 lat (msec) : 2=23.18% 00:34:07.547 cpu : usr=3.90%, sys=3.70%, ctx=1222, majf=0, minf=1 00:34:07.547 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:07.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:07.547 issued rwts: total=512,709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:07.547 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:07.547 00:34:07.547 Run status group 0 (all jobs): 00:34:07.547 READ: bw=7988KiB/s (8180kB/s), 1850KiB/s-2046KiB/s (1895kB/s-2095kB/s), io=7996KiB (8188kB), run=1001-1001msec 00:34:07.547 WRITE: bw=9483KiB/s (9710kB/s), 2046KiB/s-2833KiB/s (2095kB/s-2901kB/s), io=9492KiB (9720kB), run=1001-1001msec 00:34:07.547 00:34:07.547 Disk stats (read/write): 00:34:07.547 nvme0n1: ios=372/512, merge=0/0, ticks=858/301, in_queue=1159, util=86.27% 00:34:07.547 nvme0n2: ios=451/512, merge=0/0, ticks=617/316, in_queue=933, util=90.91% 00:34:07.547 nvme0n3: ios=446/512, merge=0/0, ticks=577/325, in_queue=902, util=95.13% 00:34:07.547 nvme0n4: ios=493/512, merge=0/0, ticks=1321/254, in_queue=1575, util=94.11% 00:34:07.547 12:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:07.808 [global] 00:34:07.808 thread=1 00:34:07.808 invalidate=1 00:34:07.808 rw=randwrite 00:34:07.808 time_based=1 00:34:07.808 runtime=1 00:34:07.808 ioengine=libaio 00:34:07.808 direct=1 00:34:07.808 bs=4096 00:34:07.808 iodepth=1 00:34:07.808 norandommap=0 00:34:07.808 numjobs=1 00:34:07.808 00:34:07.808 verify_dump=1 00:34:07.808 verify_backlog=512 00:34:07.808 verify_state_save=0 00:34:07.808 do_verify=1 00:34:07.808 verify=crc32c-intel 00:34:07.808 [job0] 00:34:07.808 filename=/dev/nvme0n1 00:34:07.808 [job1] 00:34:07.808 filename=/dev/nvme0n2 00:34:07.808 [job2] 00:34:07.808 filename=/dev/nvme0n3 00:34:07.808 [job3] 00:34:07.808 filename=/dev/nvme0n4 00:34:07.808 Could not set queue depth (nvme0n1) 00:34:07.808 Could not set queue depth (nvme0n2) 00:34:07.808 Could not set queue depth (nvme0n3) 00:34:07.808 Could not set queue depth (nvme0n4) 00:34:08.070 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.070 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.070 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.070 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:08.070 fio-3.35 00:34:08.070 Starting 4 threads 00:34:09.454 00:34:09.454 job0: (groupid=0, jobs=1): err= 0: pid=1286776: Fri Oct 11 12:07:53 2024 00:34:09.454 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1022msec) 00:34:09.454 slat (nsec): min=26026, max=26648, avg=26328.88, stdev=185.27 00:34:09.454 clat (usec): min=1159, max=42040, avg=39203.50, stdev=9814.51 00:34:09.454 lat (usec): min=1186, max=42066, avg=39229.82, stdev=9814.45 00:34:09.454 clat percentiles (usec): 00:34:09.454 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41157], 20.00th=[41157], 00:34:09.454 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:34:09.454 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:09.454 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:09.454 | 99.99th=[42206] 00:34:09.454 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 
zone resets 00:34:09.454 slat (nsec): min=9518, max=62080, avg=29333.90, stdev=9855.66 00:34:09.454 clat (usec): min=353, max=1320, avg=650.34, stdev=119.33 00:34:09.454 lat (usec): min=365, max=1353, avg=679.68, stdev=123.46 00:34:09.454 clat percentiles (usec): 00:34:09.454 | 1.00th=[ 371], 5.00th=[ 420], 10.00th=[ 494], 20.00th=[ 553], 00:34:09.454 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 685], 00:34:09.454 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 775], 95.00th=[ 816], 00:34:09.454 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 1319], 99.95th=[ 1319], 00:34:09.454 | 99.99th=[ 1319] 00:34:09.454 bw ( KiB/s): min= 4096, max= 4096, per=51.15%, avg=4096.00, stdev= 0.00, samples=1 00:34:09.454 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:09.454 lat (usec) : 500=10.78%, 750=68.62%, 1000=17.01% 00:34:09.454 lat (msec) : 2=0.57%, 50=3.02% 00:34:09.454 cpu : usr=0.49%, sys=1.76%, ctx=532, majf=0, minf=1 00:34:09.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.454 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:09.454 job1: (groupid=0, jobs=1): err= 0: pid=1286792: Fri Oct 11 12:07:53 2024 00:34:09.454 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1023msec) 00:34:09.454 slat (nsec): min=26978, max=32509, avg=27696.12, stdev=1271.93 00:34:09.454 clat (usec): min=1072, max=44114, avg=39182.90, stdev=9853.20 00:34:09.454 lat (usec): min=1100, max=44146, avg=39210.60, stdev=9853.25 00:34:09.454 clat percentiles (usec): 00:34:09.454 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[40633], 20.00th=[41157], 00:34:09.454 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:34:09.454 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[44303], 00:34:09.454 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:34:09.454 | 99.99th=[44303] 00:34:09.454 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:34:09.454 slat (nsec): min=9298, max=57217, avg=30473.71, stdev=9585.37 00:34:09.454 clat (usec): min=286, max=1062, avg=651.05, stdev=118.90 00:34:09.454 lat (usec): min=297, max=1074, avg=681.52, stdev=122.75 00:34:09.454 clat percentiles (usec): 00:34:09.454 | 1.00th=[ 363], 5.00th=[ 424], 10.00th=[ 486], 20.00th=[ 553], 00:34:09.454 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 693], 00:34:09.454 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 816], 00:34:09.454 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:09.454 | 99.99th=[ 1057] 00:34:09.454 bw ( KiB/s): min= 4096, max= 4096, per=51.15%, avg=4096.00, stdev= 0.00, samples=1 00:34:09.454 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:09.454 lat (usec) : 500=11.91%, 750=66.54%, 1000=18.15% 00:34:09.454 lat (msec) : 2=0.38%, 50=3.02% 00:34:09.454 cpu : usr=1.17%, sys=1.86%, ctx=531, majf=0, minf=1 00:34:09.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.454 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.454 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:34:09.454 job2: (groupid=0, jobs=1): err= 0: pid=1286810: Fri Oct 11 12:07:53 2024 00:34:09.454 read: IOPS=16, BW=67.0KiB/s (68.6kB/s)(68.0KiB/1015msec) 00:34:09.454 slat (nsec): min=27282, max=27961, avg=27467.76, stdev=170.08 00:34:09.454 clat (usec): min=1274, max=41998, avg=39234.10, stdev=9791.77 00:34:09.454 lat (usec): min=1301, max=42026, avg=39261.56, stdev=9791.75 00:34:09.454 clat percentiles (usec): 00:34:09.454 | 1.00th=[ 1270], 5.00th=[ 1270], 10.00th=[41157], 20.00th=[41157], 00:34:09.454 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:34:09.454 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:09.454 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:09.454 | 99.99th=[42206] 00:34:09.454 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:34:09.454 slat (nsec): min=9365, max=60723, avg=30003.35, stdev=10407.69 00:34:09.454 clat (usec): min=225, max=964, avg=634.27, stdev=122.43 00:34:09.454 lat (usec): min=235, max=1000, avg=664.27, stdev=127.33 00:34:09.454 clat percentiles (usec): 00:34:09.454 | 1.00th=[ 322], 5.00th=[ 408], 10.00th=[ 474], 20.00th=[ 529], 00:34:09.454 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:34:09.454 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 824], 00:34:09.454 | 99.00th=[ 889], 99.50th=[ 938], 99.90th=[ 963], 99.95th=[ 963], 00:34:09.454 | 99.99th=[ 963] 00:34:09.454 bw ( KiB/s): min= 4096, max= 4096, per=51.15%, avg=4096.00, stdev= 0.00, samples=1 00:34:09.454 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:09.454 lat (usec) : 250=0.38%, 500=13.99%, 750=65.60%, 1000=16.82% 00:34:09.454 lat (msec) : 2=0.19%, 50=3.02% 00:34:09.454 cpu : usr=1.18%, sys=1.87%, ctx=530, majf=0, minf=1 00:34:09.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.454 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:09.455 job3: (groupid=0, jobs=1): err= 0: pid=1286817: Fri Oct 11 12:07:53 2024 00:34:09.455 read: IOPS=15, BW=63.4KiB/s (64.9kB/s)(64.0KiB/1010msec) 00:34:09.455 slat (nsec): min=9988, max=27152, avg=25523.44, stdev=4147.50 00:34:09.455 clat (usec): min=40949, max=42050, avg=41754.25, stdev=396.81 00:34:09.455 lat (usec): min=40975, max=42076, avg=41779.77, stdev=397.26 00:34:09.455 clat percentiles (usec): 00:34:09.455 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:09.455 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:09.455 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:09.455 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:09.455 | 99.99th=[42206] 00:34:09.455 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:34:09.455 slat (nsec): min=9046, max=52339, avg=29506.17, stdev=9141.23 00:34:09.455 clat (usec): min=275, max=960, avg=629.99, stdev=123.60 00:34:09.455 lat (usec): min=289, max=993, avg=659.49, stdev=126.30 00:34:09.455 clat percentiles (usec): 00:34:09.455 | 1.00th=[ 355], 5.00th=[ 412], 10.00th=[ 457], 20.00th=[ 529], 00:34:09.455 | 30.00th=[ 570], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:34:09.455 | 70.00th=[ 701], 
80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 832],
00:34:09.455 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 963], 99.95th=[ 963],
00:34:09.455 | 99.99th=[ 963]
00:34:09.455 bw ( KiB/s): min= 4096, max= 4096, per=51.15%, avg=4096.00, stdev= 0.00, samples=1
00:34:09.455 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:34:09.455 lat (usec) : 500=15.34%, 750=67.05%, 1000=14.58%
00:34:09.455 lat (msec) : 50=3.03%
00:34:09.455 cpu : usr=1.49%, sys=1.49%, ctx=528, majf=0, minf=2
00:34:09.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:09.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:09.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:09.455 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:09.455 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:09.455
00:34:09.455 Run status group 0 (all jobs):
00:34:09.455 READ: bw=262KiB/s (268kB/s), 63.4KiB/s-67.0KiB/s (64.9kB/s-68.6kB/s), io=268KiB (274kB), run=1010-1023msec
00:34:09.455 WRITE: bw=8008KiB/s (8200kB/s), 2002KiB/s-2028KiB/s (2050kB/s-2076kB/s), io=8192KiB (8389kB), run=1010-1023msec
00:34:09.455
00:34:09.455 Disk stats (read/write):
00:34:09.455 nvme0n1: ios=55/512, merge=0/0, ticks=590/325, in_queue=915, util=90.28%
00:34:09.455 nvme0n2: ios=45/512, merge=0/0, ticks=1366/274, in_queue=1640, util=96.43%
00:34:09.455 nvme0n3: ios=34/512, merge=0/0, ticks=1337/273, in_queue=1610, util=92.72%
00:34:09.455 nvme0n4: ios=68/512, merge=0/0, ticks=571/270, in_queue=841, util=95.41%
00:34:09.455 12:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:34:09.455 [global]
00:34:09.455 thread=1
00:34:09.455 invalidate=1
00:34:09.455 rw=write
00:34:09.455 time_based=1
00:34:09.455 runtime=1
00:34:09.455 ioengine=libaio
00:34:09.455 direct=1
00:34:09.455 bs=4096
00:34:09.455 iodepth=128
00:34:09.455 norandommap=0
00:34:09.455 numjobs=1
00:34:09.455
00:34:09.455 verify_dump=1
00:34:09.455 verify_backlog=512
00:34:09.455 verify_state_save=0
00:34:09.455 do_verify=1
00:34:09.455 verify=crc32c-intel
00:34:09.455 [job0]
00:34:09.455 filename=/dev/nvme0n1
00:34:09.455 [job1]
00:34:09.455 filename=/dev/nvme0n2
00:34:09.455 [job2]
00:34:09.455 filename=/dev/nvme0n3
00:34:09.455 [job3]
00:34:09.455 filename=/dev/nvme0n4
00:34:09.455 Could not set queue depth (nvme0n1)
00:34:09.455 Could not set queue depth (nvme0n2)
00:34:09.455 Could not set queue depth (nvme0n3)
00:34:09.455 Could not set queue depth (nvme0n4)
00:34:09.715 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:09.715 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:09.715 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:09.715 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:09.715 fio-3.35
00:34:09.715 Starting 4 threads
00:34:11.097
00:34:11.097 job0: (groupid=0, jobs=1): err= 0: pid=1287230: Fri Oct 11 12:07:55 2024
00:34:11.097 read: IOPS=4631, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1009msec)
00:34:11.097 slat (nsec): min=925, max=10387k, avg=97488.09, stdev=594137.68
00:34:11.097 clat (usec): min=1190, max=46969, avg=13635.01, stdev=7866.27
00:34:11.097 lat
(usec): min=4314, max=46977, avg=13732.50, stdev=7883.60
00:34:11.097 clat percentiles (usec):
00:34:11.097 | 1.00th=[ 5800], 5.00th=[ 7308], 10.00th=[ 7963], 20.00th=[ 8586],
00:34:11.097 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11076], 60.00th=[12256],
00:34:11.097 | 70.00th=[13173], 80.00th=[16057], 90.00th=[23200], 95.00th=[31327],
00:34:11.097 | 99.00th=[45876], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924],
00:34:11.097 | 99.99th=[46924]
00:34:11.097 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets
00:34:11.097 slat (nsec): min=1615, max=17256k, avg=102532.33, stdev=675990.73
00:34:11.097 clat (usec): min=4007, max=38887, avg=12472.69, stdev=6879.55
00:34:11.097 lat (usec): min=4016, max=44401, avg=12575.22, stdev=6920.86
00:34:11.097 clat percentiles (usec):
00:34:11.097 | 1.00th=[ 4752], 5.00th=[ 6325], 10.00th=[ 7373], 20.00th=[ 7963],
00:34:11.097 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10421],
00:34:11.097 | 70.00th=[12256], 80.00th=[16057], 90.00th=[24249], 95.00th=[27919],
00:34:11.097 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060],
00:34:11.097 | 99.99th=[39060]
00:34:11.097 bw ( KiB/s): min=15880, max=24576, per=22.83%, avg=20228.00, stdev=6149.00, samples=2
00:34:11.097 iops : min= 3970, max= 6144, avg=5057.00, stdev=1537.25, samples=2
00:34:11.097 lat (msec) : 2=0.01%, 10=45.23%, 20=40.80%, 50=13.96%
00:34:11.097 cpu : usr=3.17%, sys=5.16%, ctx=399, majf=0, minf=1
00:34:11.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:34:11.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:11.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:11.097 issued rwts: total=4673,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:11.097 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:11.097 job1: (groupid=0, jobs=1): err= 0: pid=1287231: Fri Oct 11 12:07:55 2024
00:34:11.097 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec)
00:34:11.097 slat (nsec): min=931, max=45309k, avg=102427.86, stdev=1002291.04
00:34:11.097 clat (usec): min=2512, max=57941, avg=13567.61, stdev=10655.78
00:34:11.097 lat (usec): min=2526, max=57971, avg=13670.03, stdev=10700.53
00:34:11.097 clat percentiles (usec):
00:34:11.097 | 1.00th=[ 3720], 5.00th=[ 4883], 10.00th=[ 5735], 20.00th=[ 6259],
00:34:11.097 | 30.00th=[ 7373], 40.00th=[ 9241], 50.00th=[11469], 60.00th=[13435],
00:34:11.097 | 70.00th=[14746], 80.00th=[15664], 90.00th=[22152], 95.00th=[26346],
00:34:11.097 | 99.00th=[55313], 99.50th=[55313], 99.90th=[55313], 99.95th=[55837],
00:34:11.097 | 99.99th=[57934]
00:34:11.097 write: IOPS=5435, BW=21.2MiB/s (22.3MB/s)(21.3MiB/1004msec); 0 zone resets
00:34:11.098 slat (nsec): min=1592, max=44094k, avg=78701.35, stdev=747079.25
00:34:11.098 clat (usec): min=1383, max=63261, avg=10374.93, stdev=7626.98
00:34:11.098 lat (usec): min=1520, max=63271, avg=10453.63, stdev=7664.36
00:34:11.098 clat percentiles (usec):
00:34:11.098 | 1.00th=[ 3130], 5.00th=[ 3982], 10.00th=[ 4883], 20.00th=[ 6259],
00:34:11.098 | 30.00th=[ 6849], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10552],
00:34:11.098 | 70.00th=[11207], 80.00th=[11994], 90.00th=[14091], 95.00th=[15664],
00:34:11.098 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177],
00:34:11.098 | 99.99th=[63177]
00:34:11.098 bw ( KiB/s): min=20480, max=22160, per=24.06%, avg=21320.00, stdev=1187.94, samples=2
00:34:11.098 iops : min= 5120, max= 5540, avg=5330.00,
stdev=296.98, samples=2
00:34:11.098 lat (msec) : 2=0.01%, 4=3.55%, 10=44.09%, 20=44.78%, 50=4.57%
00:34:11.098 lat (msec) : 100=3.01%
00:34:11.098 cpu : usr=4.09%, sys=5.68%, ctx=375, majf=0, minf=1
00:34:11.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:34:11.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:11.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:11.098 issued rwts: total=5120,5457,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:11.098 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:11.098 job2: (groupid=0, jobs=1): err= 0: pid=1287248: Fri Oct 11 12:07:55 2024
00:34:11.098 read: IOPS=6321, BW=24.7MiB/s (25.9MB/s)(24.7MiB/1002msec)
00:34:11.098 slat (nsec): min=960, max=3991.3k, avg=79627.90, stdev=397290.66
00:34:11.098 clat (usec): min=1392, max=20253, avg=10420.34, stdev=3074.94
00:34:11.098 lat (usec): min=2120, max=20280, avg=10499.96, stdev=3107.43
00:34:11.098 clat percentiles (usec):
00:34:11.098 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7570],
00:34:11.098 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10159], 60.00th=[10945],
00:34:11.098 | 70.00th=[11863], 80.00th=[13304], 90.00th=[14877], 95.00th=[15926],
00:34:11.098 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19792], 99.95th=[20055],
00:34:11.098 | 99.99th=[20317]
00:34:11.098 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets
00:34:11.098 slat (nsec): min=1621, max=7821.1k, avg=69645.93, stdev=381363.02
00:34:11.098 clat (usec): min=1566, max=19376, avg=9112.97, stdev=2483.69
00:34:11.098 lat (usec): min=1578, max=19411, avg=9182.61, stdev=2515.31
00:34:11.098 clat percentiles (usec):
00:34:11.098 | 1.00th=[ 3720], 5.00th=[ 5342], 10.00th=[ 6259], 20.00th=[ 6915],
00:34:11.098 | 30.00th=[ 7439], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9503],
00:34:11.098 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12256], 95.00th=[13566],
00:34:11.098 | 99.00th=[16057], 99.50th=[16581], 99.90th=[16581], 99.95th=[16581],
00:34:11.098 | 99.99th=[19268]
00:34:11.098 bw ( KiB/s): min=24576, max=28672, per=30.04%, avg=26624.00, stdev=2896.31, samples=2
00:34:11.098 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2
00:34:11.098 lat (msec) : 2=0.10%, 4=0.79%, 10=57.06%, 20=42.02%, 50=0.03%
00:34:11.098 cpu : usr=4.80%, sys=5.89%, ctx=582, majf=0, minf=2
00:34:11.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:34:11.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:11.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:11.098 issued rwts: total=6334,6656,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:11.098 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:11.098 job3: (groupid=0, jobs=1): err= 0: pid=1287255: Fri Oct 11 12:07:55 2024
00:34:11.098 read: IOPS=4696, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1008msec)
00:34:11.098 slat (nsec): min=947, max=7535.8k, avg=72653.80, stdev=510325.31
00:34:11.098 clat (usec): min=2392, max=24011, avg=8960.45, stdev=2835.07
00:34:11.098 lat (usec): min=3607, max=24013, avg=9033.10, stdev=2864.79
00:34:11.098 clat percentiles (usec):
00:34:11.098 | 1.00th=[ 4293], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 6980],
00:34:11.098 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848],
00:34:11.098 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[12125], 95.00th=[15008],
00:34:11.098 | 99.00th=[19530], 99.50th=[21103],
99.90th=[23987], 99.95th=[23987],
00:34:11.098 | 99.99th=[23987]
00:34:11.098 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets
00:34:11.098 slat (nsec): min=1685, max=47976k, avg=122444.04, stdev=1551181.29
00:34:11.098 clat (usec): min=834, max=251827, avg=12510.67, stdev=15761.04
00:34:11.098 lat (usec): min=843, max=251837, avg=12633.11, stdev=16117.91
00:34:11.098 clat percentiles (msec):
00:34:11.098 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7],
00:34:11.098 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 11], 60.00th=[ 12],
00:34:11.098 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 17], 95.00th=[ 22],
00:34:11.098 | 99.00th=[ 86], 99.50th=[ 134], 99.90th=[ 218], 99.95th=[ 253],
00:34:11.098 | 99.99th=[ 253]
00:34:11.098 bw ( KiB/s): min=14064, max=26880, per=23.10%, avg=20472.00, stdev=9062.28, samples=2
00:34:11.098 iops : min= 3516, max= 6720, avg=5118.00, stdev=2265.57, samples=2
00:34:11.098 lat (usec) : 1000=0.03%
00:34:11.098 lat (msec) : 2=0.07%, 4=1.23%, 10=61.17%, 20=34.05%, 50=2.80%
00:34:11.098 lat (msec) : 100=0.32%, 250=0.29%, 500=0.03%
00:34:11.098 cpu : usr=3.28%, sys=4.17%, ctx=514, majf=0, minf=1
00:34:11.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:34:11.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:11.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:11.098 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:11.098 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:11.098
00:34:11.098 Run status group 0 (all jobs):
00:34:11.098 READ: bw=80.8MiB/s (84.7MB/s), 18.1MiB/s-24.7MiB/s (19.0MB/s-25.9MB/s), io=81.5MiB (85.4MB), run=1002-1009msec
00:34:11.098 WRITE: bw=86.5MiB/s (90.7MB/s), 19.8MiB/s-25.9MiB/s (20.8MB/s-27.2MB/s), io=87.3MiB (91.6MB), run=1002-1009msec
00:34:11.098
00:34:11.098 Disk stats (read/write):
00:34:11.098 nvme0n1: ios=4152/4160, merge=0/0, ticks=13707/14300, in_queue=28007, util=95.69%
00:34:11.098 nvme0n2: ios=4180/4608, merge=0/0, ticks=23919/17882, in_queue=41801, util=97.96%
00:34:11.098 nvme0n3: ios=5423/5632, merge=0/0, ticks=20841/18052, in_queue=38893, util=97.05%
00:34:11.098 nvme0n4: ios=3747/4096, merge=0/0, ticks=31685/33389, in_queue=65074, util=99.25%
00:34:11.098 12:07:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:34:11.098 [global]
00:34:11.098 thread=1
00:34:11.098 invalidate=1
00:34:11.098 rw=randwrite
00:34:11.098 time_based=1
00:34:11.098 runtime=1
00:34:11.098 ioengine=libaio
00:34:11.098 direct=1
00:34:11.098 bs=4096
00:34:11.098 iodepth=128
00:34:11.098 norandommap=0
00:34:11.098 numjobs=1
00:34:11.098
00:34:11.098 verify_dump=1
00:34:11.098 verify_backlog=512
00:34:11.098 verify_state_save=0
00:34:11.098 do_verify=1
00:34:11.098 verify=crc32c-intel
00:34:11.098 [job0]
00:34:11.098 filename=/dev/nvme0n1
00:34:11.098 [job1]
00:34:11.098 filename=/dev/nvme0n2
00:34:11.098 [job2]
00:34:11.098 filename=/dev/nvme0n3
00:34:11.098 [job3]
00:34:11.098 filename=/dev/nvme0n4
00:34:11.098 Could not set queue depth (nvme0n1)
00:34:11.098 Could not set queue depth (nvme0n2)
00:34:11.098 Could not set queue depth (nvme0n3)
00:34:11.098 Could not set queue depth (nvme0n4)
00:34:11.358 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:11.358 job1: (g=0):
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:11.358 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:11.358 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:11.358 fio-3.35
00:34:11.358 Starting 4 threads
00:34:12.743
00:34:12.743 job0: (groupid=0, jobs=1): err= 0: pid=1287658: Fri Oct 11 12:07:57 2024
00:34:12.743 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec)
00:34:12.743 slat (nsec): min=930, max=8440.7k, avg=71624.46, stdev=533514.85
00:34:12.743 clat (usec): min=2264, max=33486, avg=9715.64, stdev=3540.12
00:34:12.743 lat (usec): min=2270, max=33494, avg=9787.26, stdev=3578.04
00:34:12.743 clat percentiles (usec):
00:34:12.743 | 1.00th=[ 3818], 5.00th=[ 5538], 10.00th=[ 6456], 20.00th=[ 7373],
00:34:12.743 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[10028],
00:34:12.743 | 70.00th=[10290], 80.00th=[11338], 90.00th=[13960], 95.00th=[16319],
00:34:12.743 | 99.00th=[22938], 99.50th=[28443], 99.90th=[32900], 99.95th=[32900],
00:34:12.743 | 99.99th=[33424]
00:34:12.743 write: IOPS=6426, BW=25.1MiB/s (26.3MB/s)(25.2MiB/1003msec); 0 zone resets
00:34:12.743 slat (nsec): min=1603, max=8437.4k, avg=70832.72, stdev=455963.31
00:34:12.743 clat (usec): min=880, max=32251, avg=10458.68, stdev=5825.46
00:34:12.743 lat (usec): min=887, max=32255, avg=10529.52, stdev=5861.03
00:34:12.743 clat percentiles (usec):
00:34:12.743 | 1.00th=[ 1254], 5.00th=[ 3032], 10.00th=[ 4555], 20.00th=[ 5669],
00:34:12.743 | 30.00th=[ 6849], 40.00th=[ 7898], 50.00th=[ 9765], 60.00th=[10814],
00:34:12.743 | 70.00th=[12256], 80.00th=[13829], 90.00th=[17957], 95.00th=[23725],
00:34:12.743 | 99.00th=[29230], 99.50th=[29754], 99.90th=[32113], 99.95th=[32113],
00:34:12.743 | 99.99th=[32375]
00:34:12.743 bw ( KiB/s): min=24576, max=25976, per=23.66%, avg=25276.00, stdev=989.95, samples=2
00:34:12.743 iops : min= 6144, max= 6494, avg=6319.00, stdev=247.49, samples=2
00:34:12.743 lat (usec) : 1000=0.06%
00:34:12.743 lat (msec) : 2=1.06%, 4=3.81%, 10=51.41%, 20=38.40%, 50=5.26%
00:34:12.743 cpu : usr=4.29%, sys=6.99%, ctx=491, majf=0, minf=2
00:34:12.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:34:12.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:12.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:12.743 issued rwts: total=6144,6446,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:12.743 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:12.743 job1: (groupid=0, jobs=1): err= 0: pid=1287672: Fri Oct 11 12:07:57 2024
00:34:12.743 read: IOPS=5157, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1005msec)
00:34:12.743 slat (nsec): min=889, max=10005k, avg=83358.58, stdev=536276.46
00:34:12.743 clat (usec): min=1229, max=28891, avg=10930.09, stdev=3932.54
00:34:12.743 lat (usec): min=4366, max=28919, avg=11013.44, stdev=3969.76
00:34:12.743 clat percentiles (usec):
00:34:12.743 | 1.00th=[ 5669], 5.00th=[ 6652], 10.00th=[ 7242], 20.00th=[ 7963],
00:34:12.743 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10028],
00:34:12.743 | 70.00th=[11600], 80.00th=[13960], 90.00th=[17695], 95.00th=[19268],
00:34:12.743 | 99.00th=[21103], 99.50th=[23200], 99.90th=[24249], 99.95th=[25560],
00:34:12.743 | 99.99th=[28967]
00:34:12.743 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets
00:34:12.744 slat
(nsec): min=1494, max=17158k, avg=96718.38, stdev=617408.59
00:34:12.744 clat (usec): min=4657, max=36259, avg=12537.46, stdev=6122.66
00:34:12.744 lat (usec): min=4661, max=36261, avg=12634.18, stdev=6176.14
00:34:12.744 clat percentiles (usec):
00:34:12.744 | 1.00th=[ 5538], 5.00th=[ 6783], 10.00th=[ 7242], 20.00th=[ 7963],
00:34:12.744 | 30.00th=[ 8225], 40.00th=[ 8717], 50.00th=[10552], 60.00th=[12518],
00:34:12.744 | 70.00th=[13960], 80.00th=[15664], 90.00th=[20317], 95.00th=[24249],
00:34:12.744 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439],
00:34:12.744 | 99.99th=[36439]
00:34:12.744 bw ( KiB/s): min=20480, max=24056, per=20.84%, avg=22268.00, stdev=2528.61, samples=2
00:34:12.744 iops : min= 5120, max= 6014, avg=5567.00, stdev=632.15, samples=2
00:34:12.744 lat (msec) : 2=0.01%, 10=52.32%, 20=39.60%, 50=8.07%
00:34:12.744 cpu : usr=3.78%, sys=4.88%, ctx=595, majf=0, minf=2
00:34:12.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:34:12.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:12.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:12.744 issued rwts: total=5183,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:12.744 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:12.744 job2: (groupid=0, jobs=1): err= 0: pid=1287690: Fri Oct 11 12:07:57 2024
00:34:12.744 read: IOPS=6654, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1003msec)
00:34:12.744 slat (nsec): min=934, max=11419k, avg=70640.56, stdev=471167.55
00:34:12.744 clat (usec): min=2202, max=28005, avg=9412.01, stdev=2904.67
00:34:12.744 lat (usec): min=2206, max=29131, avg=9482.65, stdev=2923.58
00:34:12.744 clat percentiles (usec):
00:34:12.744 | 1.00th=[ 4686], 5.00th=[ 5866], 10.00th=[ 6915], 20.00th=[ 7832],
00:34:12.744 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241],
00:34:12.744 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[12125], 95.00th=[15926],
00:34:12.744 | 99.00th=[20055], 99.50th=[22152], 99.90th=[27132], 99.95th=[27132],
00:34:12.744 | 99.99th=[27919]
00:34:12.744 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets
00:34:12.744 slat (nsec): min=1510, max=11073k, avg=68844.69, stdev=446136.54
00:34:12.744 clat (usec): min=2788, max=25446, avg=8952.35, stdev=2777.41
00:34:12.744 lat (usec): min=2799, max=25454, avg=9021.19, stdev=2801.56
00:34:12.744 clat percentiles (usec):
00:34:12.744 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7504],
00:34:12.744 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110],
00:34:12.744 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11994],
00:34:12.744 | 99.00th=[23462], 99.50th=[25035], 99.90th=[25297], 99.95th=[25560],
00:34:12.744 | 99.99th=[25560]
00:34:12.744 bw ( KiB/s): min=27800, max=28672, per=26.43%, avg=28236.00, stdev=616.60, samples=2
00:34:12.744 iops : min= 6950, max= 7168, avg=7059.00, stdev=154.15, samples=2
00:34:12.744 lat (msec) : 4=0.51%, 10=77.39%, 20=20.41%, 50=1.68%
00:34:12.744 cpu : usr=5.19%, sys=5.69%, ctx=527, majf=0, minf=1
00:34:12.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:34:12.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:12.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:12.744 issued rwts: total=6674,7168,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:12.744 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:12.744 job3:
(groupid=0, jobs=1): err= 0: pid=1287696: Fri Oct 11 12:07:57 2024
00:34:12.744 read: IOPS=7227, BW=28.2MiB/s (29.6MB/s)(28.5MiB/1008msec)
00:34:12.744 slat (nsec): min=934, max=10408k, avg=64768.79, stdev=460238.64
00:34:12.744 clat (usec): min=2555, max=30819, avg=8850.81, stdev=2896.27
00:34:12.744 lat (usec): min=2559, max=30847, avg=8915.58, stdev=2921.64
00:34:12.744 clat percentiles (usec):
00:34:12.744 | 1.00th=[ 5211], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6980],
00:34:12.744 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8291], 60.00th=[ 8979],
00:34:12.744 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11600], 95.00th=[12649],
00:34:12.744 | 99.00th=[25297], 99.50th=[27657], 99.90th=[28181], 99.95th=[28181],
00:34:12.744 | 99.99th=[30802]
00:34:12.744 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets
00:34:12.744 slat (nsec): min=1536, max=9536.8k, avg=63073.68, stdev=431366.04
00:34:12.744 clat (usec): min=1202, max=34941, avg=8244.44, stdev=2636.13
00:34:12.744 lat (usec): min=1213, max=34943, avg=8307.51, stdev=2645.21
00:34:12.744 clat percentiles (usec):
00:34:12.744 | 1.00th=[ 4490], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 6128],
00:34:12.744 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 7898], 60.00th=[ 8455],
00:34:12.744 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[10683], 95.00th=[11731],
00:34:12.744 | 99.00th=[17695], 99.50th=[20841], 99.90th=[29492], 99.95th=[31065],
00:34:12.744 | 99.99th=[34866]
00:34:12.744 bw ( KiB/s): min=28592, max=32768, per=28.71%, avg=30680.00, stdev=2952.88, samples=2
00:34:12.744 iops : min= 7148, max= 8192, avg=7670.00, stdev=738.22, samples=2
00:34:12.744 lat (msec) : 2=0.06%, 4=0.39%, 10=76.36%, 20=22.30%, 50=0.90%
00:34:12.744 cpu : usr=5.16%, sys=6.95%, ctx=548, majf=0, minf=2
00:34:12.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:34:12.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:12.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:12.744 issued rwts: total=7285,7680,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:12.744 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:12.744
00:34:12.744 Run status group 0 (all jobs):
00:34:12.744 READ: bw=98.0MiB/s (103MB/s), 20.1MiB/s-28.2MiB/s (21.1MB/s-29.6MB/s), io=98.8MiB (104MB), run=1003-1008msec
00:34:12.744 WRITE: bw=104MiB/s (109MB/s), 21.9MiB/s-29.8MiB/s (23.0MB/s-31.2MB/s), io=105MiB (110MB), run=1003-1008msec
00:34:12.744
00:34:12.744 Disk stats (read/write):
00:34:12.744 nvme0n1: ios=4902/5120, merge=0/0, ticks=42804/49789, in_queue=92593, util=89.58%
00:34:12.744 nvme0n2: ios=4051/4096, merge=0/0, ticks=22846/27371, in_queue=50217, util=91.85%
00:34:12.744 nvme0n3: ios=5632/6036, merge=0/0, ticks=24960/23694, in_queue=48654, util=88.40%
00:34:12.744 nvme0n4: ios=6473/6656, merge=0/0, ticks=48221/47857, in_queue=96078, util=95.83%
00:34:12.744 12:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:34:12.744 12:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1287920
00:34:12.744 12:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:34:12.744 12:07:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:34:12.744 [global]
00:34:12.744 thread=1
00:34:12.744 invalidate=1
00:34:12.744 rw=read
00:34:12.744 time_based=1
00:34:12.744 runtime=10
00:34:12.744 ioengine=libaio
00:34:12.744 direct=1
00:34:12.744 bs=4096
00:34:12.744 iodepth=1
00:34:12.744 norandommap=1
00:34:12.744 numjobs=1
00:34:12.744
00:34:12.744 [job0]
00:34:12.744 filename=/dev/nvme0n1
00:34:12.744 [job1]
00:34:12.744 filename=/dev/nvme0n2
00:34:12.744 [job2]
00:34:12.744 filename=/dev/nvme0n3
00:34:12.744 [job3]
00:34:12.744 filename=/dev/nvme0n4
00:34:12.744 Could not set queue depth (nvme0n1)
00:34:12.744 Could not set queue depth (nvme0n2)
00:34:12.744 Could not set queue depth (nvme0n3)
00:34:12.744 Could not set queue depth (nvme0n4)
00:34:13.005 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:13.005 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:13.005 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:13.005 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:13.005 fio-3.35
00:34:13.005 Starting 4 threads
00:34:15.546 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:34:15.806 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=14086144, buflen=4096
00:34:15.806 fio: pid=1288133, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:15.806 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:34:16.066 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=16703488, buflen=4096
00:34:16.066 fio: pid=1288132, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:16.066 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:16.066 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:34:16.066 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10264576, buflen=4096
00:34:16.066 fio: pid=1288124, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:16.327 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:16.327 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:34:16.327 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:16.327 12:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:34:16.327 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12791808, buflen=4096
00:34:16.327 fio: pid=1288125, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:16.327
00:34:16.327 job0:
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1288124: Fri Oct 11 12:08:00 2024
00:34:16.327 read: IOPS=849, BW=3396KiB/s (3477kB/s)(9.79MiB/2952msec)
00:34:16.327 slat (usec): min=6, max=17624, avg=46.49, stdev=560.88
00:34:16.327 clat (usec): min=401, max=6271, avg=1124.83, stdev=210.28
00:34:16.327 lat (usec): min=426, max=18597, avg=1171.34, stdev=592.66
00:34:16.327 clat percentiles (usec):
00:34:16.327 | 1.00th=[ 562], 5.00th=[ 742], 10.00th=[ 857], 20.00th=[ 996],
00:34:16.327 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1188],
00:34:16.327 | 70.00th=[ 1237], 80.00th=[ 1270], 90.00th=[ 1319], 95.00th=[ 1352],
00:34:16.327 | 99.00th=[ 1418], 99.50th=[ 1450], 99.90th=[ 1483], 99.95th=[ 1516],
00:34:16.327 | 99.99th=[ 6259]
00:34:16.327 bw ( KiB/s): min= 3264, max= 3568, per=20.10%, avg=3348.80, stdev=127.32, samples=5
00:34:16.327 iops : min= 816, max= 892, avg=837.20, stdev=31.83, samples=5
00:34:16.327 lat (usec) : 500=0.36%, 750=5.07%, 1000=15.24%
00:34:16.327 lat (msec) : 2=79.26%, 10=0.04%
00:34:16.327 cpu : usr=0.91%, sys=2.51%, ctx=2512, majf=0, minf=2
00:34:16.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:16.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:16.327 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:16.327 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:16.327 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:16.327 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1288125: Fri Oct 11 12:08:00 2024
00:34:16.327 read: IOPS=989, BW=3957KiB/s (4052kB/s)(12.2MiB/3157msec)
00:34:16.327 slat (usec): min=6, max=15255, avg=44.95, stdev=482.59
00:34:16.327 clat (usec): min=165, max=42018, avg=959.68, stdev=2375.20
00:34:16.327 lat (usec): min=173, max=42043, avg=1004.64, stdev=2421.78
00:34:16.327 clat percentiles (usec):
00:34:16.327 | 1.00th=[ 318], 5.00th=[ 420], 10.00th=[ 474], 20.00th=[ 578],
00:34:16.327 | 30.00th=[ 668], 40.00th=[ 742], 50.00th=[ 807], 60.00th=[ 865],
00:34:16.327 | 70.00th=[ 996], 80.00th=[ 1090], 90.00th=[ 1188], 95.00th=[ 1237],
00:34:16.327 | 99.00th=[ 1336], 99.50th=[ 1434], 99.90th=[41681], 99.95th=[42206],
00:34:16.327 | 99.99th=[42206]
00:34:16.327 bw ( KiB/s): min= 2248, max= 5832, per=24.23%, avg=4035.33, stdev=1164.70, samples=6
00:34:16.327 iops : min= 562, max= 1458, avg=1008.83, stdev=291.18, samples=6
00:34:16.327 lat (usec) : 250=0.26%, 500=12.61%, 750=28.04%, 1000=29.58%
00:34:16.327 lat (msec) : 2=29.13%, 50=0.35%
00:34:16.327 cpu : usr=1.05%, sys=3.45%, ctx=3131, majf=0, minf=1
00:34:16.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:16.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:16.328 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:16.328 issued rwts: total=3124,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:16.328 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:16.328 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1288132: Fri Oct 11 12:08:00 2024
00:34:16.328 read: IOPS=1474, BW=5895KiB/s (6037kB/s)(15.9MiB/2767msec)
00:34:16.328 slat (usec): min=6, max=21612, avg=33.70, stdev=421.93
00:34:16.328 clat (usec): min=164, max=41472, avg=638.64, stdev=1116.74
00:34:16.328 lat (usec): min=172, max=41498,
avg=672.35, stdev=1195.44
00:34:16.328 clat percentiles (usec):
00:34:16.328 | 1.00th=[ 227], 5.00th=[ 297], 10.00th=[ 355], 20.00th=[ 453],
00:34:16.328 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 644],
00:34:16.328 | 70.00th=[ 717], 80.00th=[ 791], 90.00th=[ 857], 95.00th=[ 889],
00:34:16.328 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1696], 99.95th=[41157],
00:34:16.328 | 99.99th=[41681]
00:34:16.328 bw ( KiB/s): min= 5240, max= 6936, per=36.02%, avg=6000.00, stdev=662.31, samples=5
00:34:16.328 iops : min= 1310, max= 1734, avg=1500.00, stdev=165.58, samples=5
00:34:16.328 lat (usec) : 250=1.42%, 500=25.10%, 750=47.44%, 1000=25.67%
00:34:16.328 lat (msec) : 2=0.25%, 4=0.02%, 50=0.07%
00:34:16.328 cpu : usr=1.45%, sys=4.27%, ctx=4082, majf=0, minf=1
00:34:16.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:16.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:16.328 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:16.328 issued rwts: total=4079,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:16.328 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:16.328 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1288133: Fri Oct 11 12:08:00 2024
00:34:16.328 read: IOPS=1334, BW=5336KiB/s (5464kB/s)(13.4MiB/2578msec)
00:34:16.328 slat (nsec): min=6372, max=62414, avg=25759.69, stdev=5576.52
00:34:16.328 clat (usec): min=203, max=41468, avg=717.62, stdev=1007.16
00:34:16.328 lat (usec): min=210, max=41508, avg=743.38, stdev=1007.47
00:34:16.328 clat percentiles (usec):
00:34:16.328 | 1.00th=[ 306], 5.00th=[ 375], 10.00th=[ 433], 20.00th=[ 498],
00:34:16.328 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 660], 60.00th=[ 717],
00:34:16.328 | 70.00th=[ 799], 80.00th=[ 881], 90.00th=[ 1020], 95.00th=[ 1156],
00:34:16.328 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1975], 99.95th=[41157],
00:34:16.328 | 99.99th=[41681]
00:34:16.328 bw ( KiB/s): min= 2976, max= 6304, per=32.42%, avg=5400.00, stdev=1373.49, samples=5
00:34:16.328 iops : min= 744, max= 1576, avg=1350.00, stdev=343.37, samples=5
00:34:16.328 lat (usec) : 250=0.35%, 500=19.97%, 750=43.92%, 1000=25.06%
00:34:16.328 lat (msec) : 2=10.58%, 4=0.03%, 50=0.06%
00:34:16.328 cpu : usr=2.02%, sys=5.12%, ctx=3440, majf=0, minf=2
00:34:16.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:16.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:16.328 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:16.328 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:16.328 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:16.328
00:34:16.328 Run status group 0 (all jobs):
00:34:16.328 READ: bw=16.3MiB/s (17.1MB/s), 3396KiB/s-5895KiB/s (3477kB/s-6037kB/s), io=51.4MiB (53.8MB), run=2578-3157msec
00:34:16.328
00:34:16.328 Disk stats (read/write):
00:34:16.328 nvme0n1: ios=2384/0, merge=0/0, ticks=2661/0, in_queue=2661, util=93.26%
00:34:16.328 nvme0n2: ios=3121/0, merge=0/0, ticks=2679/0, in_queue=2679, util=93.74%
00:34:16.328 nvme0n3: ios=3866/0, merge=0/0, ticks=2387/0, in_queue=2387, util=95.99%
00:34:16.328 nvme0n4: ios=3233/0, merge=0/0, ticks=1945/0, in_queue=1945, util=96.06%
00:34:16.588 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:16.588
12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:34:16.848 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:16.848 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:34:16.848 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:17.108 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1287920
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:17.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:34:17.369 nvmf hotplug test: fio failed as expected
00:34:17.369 12:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:17.630 rmmod nvme_tcp
00:34:17.630 rmmod nvme_fabrics
00:34:17.630 rmmod nvme_keyring
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1284745 ']'
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1284745
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1284745 ']'
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1284745
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1284745
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1284745'
00:34:17.630 killing process with pid 1284745
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1284745
00:34:17.630 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1284745
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:34:17.891 12:08:02
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:17.891 12:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:19.803 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:19.803
00:34:19.803 real 0m28.140s
00:34:19.803 user 2m22.317s
00:34:19.803 sys 0m12.515s
00:34:19.803 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:34:19.803 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:19.803 ************************************
00:34:19.803 END TEST nvmf_fio_target
00:34:19.803 ************************************
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:20.063 ************************************
00:34:20.063 START TEST nvmf_bdevio
00:34:20.063 ************************************
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:20.063 * Looking for test storage...
00:34:20.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:34:20.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:20.063 --rc genhtml_branch_coverage=1
00:34:20.063 --rc genhtml_function_coverage=1
00:34:20.063 --rc genhtml_legend=1
00:34:20.063 --rc geninfo_all_blocks=1
00:34:20.063 --rc geninfo_unexecuted_blocks=1
00:34:20.063
00:34:20.063 '
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:34:20.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:20.063 --rc genhtml_branch_coverage=1
00:34:20.063 --rc genhtml_function_coverage=1
00:34:20.063 --rc genhtml_legend=1
00:34:20.063 --rc geninfo_all_blocks=1
00:34:20.063 --rc geninfo_unexecuted_blocks=1
00:34:20.063
00:34:20.063 '
00:34:20.063 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:34:20.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:20.064 --rc genhtml_branch_coverage=1
00:34:20.064 --rc genhtml_function_coverage=1
00:34:20.064 --rc genhtml_legend=1
00:34:20.064 --rc geninfo_all_blocks=1
00:34:20.064 --rc geninfo_unexecuted_blocks=1
00:34:20.064
00:34:20.064 '
00:34:20.064 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:34:20.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:20.064 --rc genhtml_branch_coverage=1
00:34:20.064 --rc genhtml_function_coverage=1
00:34:20.064 --rc genhtml_legend=1
00:34:20.064 --rc geninfo_all_blocks=1
00:34:20.064 --rc geninfo_unexecuted_blocks=1
00:34:20.064
00:34:20.064 '
00:34:20.064 12:08:04
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:20.064 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:20.325 12:08:04
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:20.325 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:20.326 12:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:28.472 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:28.472 12:08:11 
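The device-discovery pass traced above reduces to a short loop: common.sh builds per-family allow-lists of vendor:device IDs, keeps the family that matches this rig (e810), and maps each PCI address to its kernel netdev through sysfs. A condensed sketch, assuming pci_bus_cache maps "vendor:device" strings to space-separated PCI addresses, as the array expansions in the trace imply:

    # Condensed from the nvmf/common.sh trace above. pci_bus_cache is an
    # assumption inferred from the expansions shown; not a drop-in rewrite.
    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x1592"]})     # Intel E810 family
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # 0x159b matched both ports above
    pci_devs=("${e810[@]}")                      # e810 wins over x722/mlx on this rig
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
        net_devs+=("${pci_net_devs[@]##*/}")               # keep only the interface name
    done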
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:28.472 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:28.472 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:28.472 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:28.472 12:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:28.472 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:28.472 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:28.472 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:28.472 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:28.472 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:28.472 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:28.472 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:28.472 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:28.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:28.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:34:28.473 00:34:28.473 --- 10.0.0.2 ping statistics --- 00:34:28.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.473 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:28.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:28.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:34:28.473 00:34:28.473 --- 10.0.0.1 ping statistics --- 00:34:28.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.473 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:28.473 12:08:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1293147 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1293147 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1293147 ']' 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:28.473 12:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.473 [2024-10-11 12:08:12.276034] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:28.473 [2024-10-11 12:08:12.277136] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:34:28.473 [2024-10-11 12:08:12.277185] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.473 [2024-10-11 12:08:12.366970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:28.473 [2024-10-11 12:08:12.419815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.473 [2024-10-11 12:08:12.419872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.473 [2024-10-11 12:08:12.419880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.473 [2024-10-11 12:08:12.419887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.473 [2024-10-11 12:08:12.419893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:28.473 [2024-10-11 12:08:12.421955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:28.473 [2024-10-11 12:08:12.422115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:28.473 [2024-10-11 12:08:12.422255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:28.473 [2024-10-11 12:08:12.422255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:28.473 [2024-10-11 12:08:12.498914] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
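Condensed, the interrupt-mode bring-up the trace just walked through is: move one physical port into a private namespace to act as the target, keep its sibling in the host namespace as the initiator, open TCP/4420, and start nvmf_tgt inside the namespace. A sketch using exactly the names, addresses, and flags printed above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Target app: shm id 0, tracepoint mask 0xFFFF, interrupt mode, cores 3-6
    # (-m 0x78 matches the four reactors reported on cores 3,4,5,6 below):
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78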
00:34:28.473 [2024-10-11 12:08:12.499911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:28.473 [2024-10-11 12:08:12.500155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:28.473 [2024-10-11 12:08:12.500511] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:28.473 [2024-10-11 12:08:12.500555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:28.473 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:28.473 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:34:28.473 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:28.473 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:28.473 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.734 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.734 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:28.734 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.734 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.734 [2024-10-11 12:08:13.139248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.735 Malloc0 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.735 12:08:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:28.735 [2024-10-11 12:08:13.231575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:28.735 { 00:34:28.735 "params": { 00:34:28.735 "name": "Nvme$subsystem", 00:34:28.735 "trtype": "$TEST_TRANSPORT", 00:34:28.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:28.735 "adrfam": "ipv4", 00:34:28.735 "trsvcid": "$NVMF_PORT", 00:34:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:28.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:28.735 "hdgst": ${hdgst:-false}, 00:34:28.735 "ddgst": ${ddgst:-false} 00:34:28.735 }, 00:34:28.735 "method": "bdev_nvme_attach_controller" 00:34:28.735 } 00:34:28.735 EOF 00:34:28.735 )") 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:34:28.735 12:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:28.735 "params": { 00:34:28.735 "name": "Nvme1", 00:34:28.735 "trtype": "tcp", 00:34:28.735 "traddr": "10.0.0.2", 00:34:28.735 "adrfam": "ipv4", 00:34:28.735 "trsvcid": "4420", 00:34:28.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:28.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:28.735 "hdgst": false, 00:34:28.735 "ddgst": false 00:34:28.735 }, 00:34:28.735 "method": "bdev_nvme_attach_controller" 00:34:28.735 }' 00:34:28.735 [2024-10-11 12:08:13.290268] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
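The target-side setup in bdevio.sh@18-24 is five RPCs; rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent standalone calls would look roughly like the sketch below (flags copied verbatim from the trace, socket options omitted). The last line shows why the trace reports --json /dev/fd/62: bdevio evidently receives the generated initiator JSON through a bash process substitution.

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevio attaches with the JSON printed above.
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)      # bash exposes this as /dev/fd/62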
00:34:28.735 [2024-10-11 12:08:13.290340] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293489 ] 00:34:28.996 [2024-10-11 12:08:13.372912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:28.996 [2024-10-11 12:08:13.429316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.996 [2024-10-11 12:08:13.429469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:28.996 [2024-10-11 12:08:13.429471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.257 I/O targets: 00:34:29.257 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:29.257 00:34:29.257 00:34:29.257 CUnit - A unit testing framework for C - Version 2.1-3 00:34:29.257 http://cunit.sourceforge.net/ 00:34:29.257 00:34:29.257 00:34:29.257 Suite: bdevio tests on: Nvme1n1 00:34:29.257 Test: blockdev write read block ...passed 00:34:29.257 Test: blockdev write zeroes read block ...passed 00:34:29.257 Test: blockdev write zeroes read no split ...passed 00:34:29.257 Test: blockdev write zeroes read split ...passed 00:34:29.257 Test: blockdev write zeroes read split partial ...passed 00:34:29.257 Test: blockdev reset ...[2024-10-11 12:08:13.883547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:29.257 [2024-10-11 12:08:13.883651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1664510 (9): Bad file descriptor 00:34:29.518 [2024-10-11 12:08:13.932543] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:29.518 passed 00:34:29.518 Test: blockdev write read 8 blocks ...passed 00:34:29.518 Test: blockdev write read size > 128k ...passed 00:34:29.518 Test: blockdev write read invalid size ...passed 00:34:29.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:29.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:29.518 Test: blockdev write read max offset ...passed 00:34:29.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:29.518 Test: blockdev writev readv 8 blocks ...passed 00:34:29.518 Test: blockdev writev readv 30 x 1block ...passed 00:34:29.780 Test: blockdev writev readv block ...passed 00:34:29.780 Test: blockdev writev readv size > 128k ...passed 00:34:29.780 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:29.780 Test: blockdev comparev and writev ...[2024-10-11 12:08:14.197981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:29.780 [2024-10-11 12:08:14.198030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.198046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:29.780 [2024-10-11 12:08:14.198055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.198562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:29.780 [2024-10-11 12:08:14.198575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.198589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:29.780 [2024-10-11 12:08:14.198597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.199148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:29.780 [2024-10-11 12:08:14.199162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.199177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:29.780 [2024-10-11 12:08:14.199185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.199708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:29.780 [2024-10-11 12:08:14.199723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.199737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:29.780 [2024-10-11 12:08:14.199745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:29.780 passed 00:34:29.780 Test: blockdev nvme passthru rw ...passed 00:34:29.780 Test: blockdev nvme passthru vendor specific ...[2024-10-11 12:08:14.284471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:29.780 [2024-10-11 12:08:14.284488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.284906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:29.780 [2024-10-11 12:08:14.284923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.285192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:29.780 [2024-10-11 12:08:14.285203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:29.780 [2024-10-11 12:08:14.285610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:29.780 [2024-10-11 12:08:14.285623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:29.780 passed 00:34:29.780 Test: blockdev nvme admin passthru ...passed 00:34:29.780 Test: blockdev copy ...passed 00:34:29.780 00:34:29.780 Run Summary: Type Total Ran Passed Failed Inactive 00:34:29.780 suites 1 1 n/a 0 0 00:34:29.780 tests 23 23 23 0 0 00:34:29.780 asserts 152 152 152 0 n/a 00:34:29.780 00:34:29.780 Elapsed time = 1.188 seconds 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:30.041 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:30.042 rmmod nvme_tcp 00:34:30.042 rmmod nvme_fabrics 00:34:30.042 rmmod nvme_keyring 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
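The teardown that runs from here through nvmftestfini condenses to the sequence below. _remove_spdk_ns is sketched as a plain namespace delete, since its body is not part of this excerpt; everything else is taken from the trace.

    sync
    modprobe -v -r nvme-tcp        # pulls out nvme_tcp, nvme_fabrics, nvme_keyring (rmmod lines above)
    kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess 1293147
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the rule tagged earlier
    ip netns delete cvl_0_0_ns_spdk                         # _remove_spdk_ns, sketched
    ip -4 addr flush cvl_0_1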
00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1293147 ']' 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1293147 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1293147 ']' 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1293147 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1293147 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1293147' 00:34:30.042 killing process with pid 1293147 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1293147 00:34:30.042 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1293147 00:34:30.303 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:30.303 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:30.303 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:30.303 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:30.304 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:34:30.304 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:34:30.304 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:30.304 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:30.304 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:30.304 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.304 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.304 12:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.851 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.851 00:34:32.851 real 0m12.395s 00:34:32.851 user 
0m10.462s 00:34:32.851 sys 0m6.555s 00:34:32.851 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.851 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.851 ************************************ 00:34:32.851 END TEST nvmf_bdevio 00:34:32.851 ************************************ 00:34:32.851 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:32.851 00:34:32.851 real 4m58.949s 00:34:32.851 user 10m16.039s 00:34:32.851 sys 2m4.910s 00:34:32.851 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.851 12:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:32.851 ************************************ 00:34:32.851 END TEST nvmf_target_core_interrupt_mode 00:34:32.851 ************************************ 00:34:32.851 12:08:16 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:32.851 12:08:16 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:32.851 12:08:16 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.851 12:08:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.851 ************************************ 00:34:32.851 START TEST nvmf_interrupt 00:34:32.851 ************************************ 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:32.851 * Looking for test storage... 
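The START/END banners and the real/user/sys triple around the test transition above come from the run_test harness; from the output alone it behaves roughly like the sketch below. The body of run_test in autotest_common.sh is not shown in this excerpt, so treat this as a reconstruction, with only the final invocation copied from the trace.

    run_test() {                     # reconstruction from observable output only
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                    # emits the real/user/sys summary seen above
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test nvmf_interrupt test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode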
00:34:32.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:32.851 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:32.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.852 --rc genhtml_branch_coverage=1 00:34:32.852 --rc genhtml_function_coverage=1 00:34:32.852 --rc genhtml_legend=1 00:34:32.852 --rc geninfo_all_blocks=1 00:34:32.852 --rc geninfo_unexecuted_blocks=1 00:34:32.852 00:34:32.852 ' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:32.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.852 --rc genhtml_branch_coverage=1 00:34:32.852 --rc genhtml_function_coverage=1 00:34:32.852 --rc genhtml_legend=1 00:34:32.852 --rc geninfo_all_blocks=1 00:34:32.852 --rc geninfo_unexecuted_blocks=1 00:34:32.852 00:34:32.852 ' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:32.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.852 --rc genhtml_branch_coverage=1 00:34:32.852 --rc genhtml_function_coverage=1 00:34:32.852 --rc genhtml_legend=1 00:34:32.852 --rc geninfo_all_blocks=1 00:34:32.852 --rc geninfo_unexecuted_blocks=1 00:34:32.852 00:34:32.852 ' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:32.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.852 --rc genhtml_branch_coverage=1 00:34:32.852 --rc genhtml_function_coverage=1 00:34:32.852 --rc genhtml_legend=1 00:34:32.852 --rc geninfo_all_blocks=1 00:34:32.852 --rc geninfo_unexecuted_blocks=1 00:34:32.852 00:34:32.852 ' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.852 12:08:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.997 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:40.998 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.998 12:08:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:40.998 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:40.998 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:40.998 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:40.998 12:08:24 
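The device discovery traced above reduces to a short sysfs walk: each whitelisted Intel/Mellanox PCI ID is resolved to the kernel net device bound to that address. A minimal standalone sketch of the same walk follows; the two PCI addresses and the resulting cvl_* names are taken from this run and are rig-specific.

  # Resolve test NICs from PCI address to netdev name via /sys, as the
  # harness does; skip any port that has no network driver bound.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e "$path" ]] || continue
          echo "Found net devices under $pci: ${path##*/}"
      done
  done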
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:40.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:40.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:34:40.998 00:34:40.998 --- 10.0.0.2 ping statistics --- 00:34:40.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.998 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:40.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:40.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:34:40.998 00:34:40.998 --- 10.0.0.1 ping statistics --- 00:34:40.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.998 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1297830 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1297830 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1297830 ']' 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:40.998 12:08:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:40.998 [2024-10-11 12:08:24.791431] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:40.998 [2024-10-11 12:08:24.792532] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:34:40.998 [2024-10-11 12:08:24.792579] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.998 [2024-10-11 12:08:24.880282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:40.998 [2024-10-11 12:08:24.931724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
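Condensed, the nvmf_tcp_init sequence above turns the two physical ports into a self-contained initiator/target pair: one port is moved into a private network namespace for the target while the other stays in the root namespace for the initiator, and a reciprocal ping confirms the path. The commands below are lifted directly from the trace; the interface names and 10.0.0.x addresses belong to this run.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                         # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> initiator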
00:34:40.998 [2024-10-11 12:08:24.931771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.998 [2024-10-11 12:08:24.931779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.998 [2024-10-11 12:08:24.931786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.998 [2024-10-11 12:08:24.931793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.998 [2024-10-11 12:08:24.933537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.998 [2024-10-11 12:08:24.933541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.998 [2024-10-11 12:08:25.009862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:40.998 [2024-10-11 12:08:25.010559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:40.998 [2024-10-11 12:08:25.010770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:40.998 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:40.998 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:40.998 12:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:40.998 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:40.998 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:41.259 5000+0 records in 00:34:41.259 5000+0 records out 00:34:41.259 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0199887 s, 512 MB/s 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:41.259 AIO0 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:41.259 [2024-10-11 12:08:25.726643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.259 12:08:25 
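With the target app up, the trace builds an AIO-backed namespace and a TCP transport through the rpc_cmd wrapper. The same bring-up expressed with SPDK's scripts/rpc.py looks roughly as follows; /tmp/aiofile stands in for the workspace-relative path used in the log, and the last three RPCs correspond to the subsystem, namespace, and listener calls the trace issues immediately after this point.

  dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000             # 10 MB backing file
  scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048          # register it as bdev AIO0
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420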
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:41.259 [2024-10-11 12:08:25.771071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1297830 0 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1297830 0 idle 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1297830 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:41.259 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:41.260 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:41.260 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:41.519 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297830 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.30 reactor_0' 00:34:41.519 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297830 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.30 reactor_0 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1297830 1 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1297830 1 idle 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1297830 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:41.520 12:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:41.520 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297834 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:41.520 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297834 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:41.520 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:41.520 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1298195 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1297830 0 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1297830 0 busy 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1297830 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297830 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.31 reactor_0' 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297830 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.31 reactor_0 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:41.780 12:08:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:42.723 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:42.723 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297830 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.64 reactor_0' 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297830 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.64 reactor_0 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1297830 1 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1297830 1 busy 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1297830 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:42.985 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297834 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.36 reactor_1' 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297834 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:01.36 reactor_1 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:43.246 12:08:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1298195 00:34:53.249 Initializing NVMe Controllers 00:34:53.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:53.249 Controller IO queue size 256, less than required. 00:34:53.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:53.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:53.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:53.249 Initialization complete. Launching workers. 
00:34:53.249 ======================================================== 00:34:53.249 Latency(us) 00:34:53.249 Device Information : IOPS MiB/s Average min max 00:34:53.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18768.80 73.32 13644.16 4725.72 33609.73 00:34:53.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19197.90 74.99 13336.59 7770.28 30246.99 00:34:53.249 ======================================================== 00:34:53.249 Total : 37966.70 148.31 13488.64 4725.72 33609.73 00:34:53.249 00:34:53.249 [2024-10-11 12:08:36.390387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a50c20 is same with the state(6) to be set 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1297830 0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1297830 0 idle 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1297830 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297830 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0' 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297830 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1297830 1 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1297830 1 idle 
00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1297830 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297834 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297834 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:53.249 12:08:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:53.249 12:08:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:53.249 12:08:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:34:53.249 12:08:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:53.249 12:08:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:34:53.249 12:08:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:34:55.163 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter 
)) 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1297830 0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1297830 0 idle 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1297830 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297830 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0' 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297830 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1297830 1 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1297830 1 idle 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1297830 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1297830 -w 256 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1297834 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1297834 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:55.164 12:08:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:55.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:55.425 rmmod nvme_tcp 00:34:55.425 rmmod nvme_fabrics 00:34:55.425 rmmod nvme_keyring 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:55.425 12:08:39 
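The host-side exercise just traced follows the usual kernel-initiator pattern: connect to the exported subsystem, wait for a block device carrying the subsystem serial to appear, then disconnect. Stripped of the wrapper functions it is roughly:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
  # Poll (as waitforserial does, with a bounded retry loop) until the
  # namespace shows up carrying the subsystem's serial number.
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1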
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 1297830 ']' 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1297830 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1297830 ']' 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1297830 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:55.425 12:08:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1297830 00:34:55.425 12:08:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:55.425 12:08:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:55.425 12:08:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1297830' 00:34:55.425 killing process with pid 1297830 00:34:55.425 12:08:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1297830 00:34:55.425 12:08:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1297830 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:55.686 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:55.687 12:08:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.687 12:08:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:55.687 12:08:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.231 12:08:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:58.231 00:34:58.231 real 0m25.254s 00:34:58.231 user 0m40.339s 00:34:58.231 sys 0m9.708s 00:34:58.231 12:08:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:58.231 12:08:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:58.231 ************************************ 00:34:58.231 END TEST nvmf_interrupt 00:34:58.231 ************************************ 00:34:58.231 00:34:58.231 real 29m27.358s 00:34:58.231 user 60m37.651s 00:34:58.231 sys 10m1.177s 00:34:58.231 12:08:42 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:58.231 12:08:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.231 ************************************ 00:34:58.231 END TEST nvmf_tcp 00:34:58.231 ************************************ 00:34:58.231 12:08:42 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:58.231 12:08:42 -- spdk/autotest.sh@282 -- # run_test 
spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:58.231 12:08:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:58.231 12:08:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:58.231 12:08:42 -- common/autotest_common.sh@10 -- # set +x 00:34:58.231 ************************************ 00:34:58.231 START TEST spdkcli_nvmf_tcp 00:34:58.231 ************************************ 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:58.231 * Looking for test storage... 00:34:58.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:58.231 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:58.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.232 --rc genhtml_branch_coverage=1 00:34:58.232 --rc genhtml_function_coverage=1 00:34:58.232 --rc genhtml_legend=1 00:34:58.232 --rc geninfo_all_blocks=1 00:34:58.232 --rc geninfo_unexecuted_blocks=1 00:34:58.232 00:34:58.232 ' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:58.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.232 --rc genhtml_branch_coverage=1 00:34:58.232 --rc genhtml_function_coverage=1 00:34:58.232 --rc genhtml_legend=1 00:34:58.232 --rc geninfo_all_blocks=1 00:34:58.232 --rc geninfo_unexecuted_blocks=1 00:34:58.232 00:34:58.232 ' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:58.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.232 --rc genhtml_branch_coverage=1 00:34:58.232 --rc genhtml_function_coverage=1 00:34:58.232 --rc genhtml_legend=1 00:34:58.232 --rc geninfo_all_blocks=1 00:34:58.232 --rc geninfo_unexecuted_blocks=1 00:34:58.232 00:34:58.232 ' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:58.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.232 --rc genhtml_branch_coverage=1 00:34:58.232 --rc genhtml_function_coverage=1 00:34:58.232 --rc genhtml_legend=1 00:34:58.232 --rc geninfo_all_blocks=1 00:34:58.232 --rc geninfo_unexecuted_blocks=1 00:34:58.232 00:34:58.232 ' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:58.232 
12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:58.232 12:08:42 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:58.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1301385 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1301385 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1301385 ']' 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:58.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:58.232 12:08:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.232 [2024-10-11 12:08:42.682789] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
00:34:58.232 [2024-10-11 12:08:42.682847] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301385 ] 00:34:58.232 [2024-10-11 12:08:42.760129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:58.232 [2024-10-11 12:08:42.811055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:58.232 [2024-10-11 12:08:42.811060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.173 12:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:59.173 12:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:34:59.173 12:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:59.173 12:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:59.173 12:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.173 12:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:59.173 12:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:59.173 12:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:59.174 12:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:59.174 12:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.174 12:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:59.174 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:59.174 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:59.174 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:59.174 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:59.174 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:59.174 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:59.174 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:59.174 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:59.174 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:59.174 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:59.174 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:59.174 ' 00:35:01.715 [2024-10-11 12:08:46.224771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:03.096 [2024-10-11 12:08:47.585019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:05.681 [2024-10-11 12:08:50.112332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:08.223 [2024-10-11 12:08:52.318606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:09.606 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:09.606 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:09.606 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:09.606 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:09.606 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:09.606 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:09.606 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:09.606 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:09.606 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:09.606 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:09.606 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:09.606 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:09.606 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:09.606 12:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:09.606 12:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:09.606 12:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:09.606 12:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:09.606 12:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:09.606 12:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:09.606 12:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:09.606 12:08:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:10.178 12:08:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:10.178 12:08:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:10.178 12:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:10.178 12:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:10.178 12:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:10.178 
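The check_match step above is the heart of the verification: it renders the live /nvmf subtree with spdkcli and diffs it against a checked-in expectations file using SPDK's match helper. A minimal sketch of that flow, assuming check_match redirects the ll output into the .test capture file before matching (the redirect itself is hidden by the xtrace above) and that the match helper derives the capture path by stripping the .match suffix:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
MATCH_DIR=$SPDK/test/spdkcli/match_files
# Render the current /nvmf configuration tree exactly as spdkcli prints it.
$SPDK/scripts/spdkcli.py ll /nvmf > $MATCH_DIR/spdkcli_nvmf.test
# Compare the capture line by line against the golden file; .match files
# may carry wildcards for volatile fields such as serial numbers.
$SPDK/test/app/match/match $MATCH_DIR/spdkcli_nvmf.test.match
# Drop the capture on success, as common.sh@46 does in the log above.
rm -f $MATCH_DIR/spdkcli_nvmf.test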
12:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:10.178 12:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:10.178 12:08:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:10.178 12:08:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:10.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:10.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:10.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:10.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:10.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:10.178 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:10.178 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:10.178 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:10.178 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:10.178 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:10.178 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:10.178 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:10.178 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:10.178 ' 00:35:16.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:16.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:16.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:16.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:16.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:16.760 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:16.760 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:16.760 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:16.760 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:16.760 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:16.760 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:16.760 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:16.760 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:16.760 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:16.760 
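The clear step above tears the configuration down in reverse dependency order: namespaces and hosts first, then listeners, then the subsystems themselves, and only then the malloc bdevs that backed them; the per-item deletes exist to exercise each spdkcli command. A shorter teardown sketch, assuming spdkcli.py executes its argv as a one-shot command the way ll /nvmf is invoked above, and that deleting a subsystem also drops any namespaces and listeners still attached to it:

SPDKCLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py
# Remove every subsystem in one shot; per-subsystem state goes with it.
$SPDKCLI /nvmf/subsystem delete_all
# Backing bdevs can only be deleted once no subsystem references them.
for m in Malloc6 Malloc5 Malloc4 Malloc3 Malloc2 Malloc1; do
    $SPDKCLI /bdevs/malloc delete $m
done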
12:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1301385 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1301385 ']' 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1301385 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1301385 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1301385' 00:35:16.760 killing process with pid 1301385 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1301385 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1301385 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:16.760 12:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1301385 ']' 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1301385 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1301385 ']' 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1301385 00:35:16.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1301385) - No such process 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1301385 is not found' 00:35:16.761 Process with pid 1301385 is not found 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:16.761 00:35:16.761 real 0m18.103s 00:35:16.761 user 0m40.215s 00:35:16.761 sys 0m0.854s 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:16.761 12:09:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:16.761 ************************************ 00:35:16.761 END TEST spdkcli_nvmf_tcp 00:35:16.761 ************************************ 00:35:16.761 12:09:00 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:16.761 12:09:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:16.761 12:09:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:16.761 12:09:00 -- common/autotest_common.sh@10 -- # set +x 00:35:16.761 ************************************ 00:35:16.761 START TEST nvmf_identify_passthru 00:35:16.761 ************************************ 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:16.761 * Looking for test 
storage... 00:35:16.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.761 --rc genhtml_branch_coverage=1 00:35:16.761 --rc genhtml_function_coverage=1 00:35:16.761 --rc genhtml_legend=1 00:35:16.761 --rc geninfo_all_blocks=1 00:35:16.761 --rc geninfo_unexecuted_blocks=1 00:35:16.761 00:35:16.761 ' 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.761 --rc genhtml_branch_coverage=1 00:35:16.761 --rc genhtml_function_coverage=1 00:35:16.761 --rc genhtml_legend=1 00:35:16.761 --rc geninfo_all_blocks=1 00:35:16.761 --rc geninfo_unexecuted_blocks=1 00:35:16.761 00:35:16.761 ' 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.761 --rc genhtml_branch_coverage=1 00:35:16.761 --rc genhtml_function_coverage=1 00:35:16.761 --rc genhtml_legend=1 00:35:16.761 --rc geninfo_all_blocks=1 00:35:16.761 --rc geninfo_unexecuted_blocks=1 00:35:16.761 00:35:16.761 ' 00:35:16.761 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.761 --rc genhtml_branch_coverage=1 00:35:16.761 --rc genhtml_function_coverage=1 00:35:16.761 --rc genhtml_legend=1 00:35:16.761 --rc geninfo_all_blocks=1 00:35:16.761 --rc geninfo_unexecuted_blocks=1 00:35:16.761 00:35:16.761 ' 00:35:16.761 12:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.761 12:09:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.761 12:09:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.761 12:09:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.761 12:09:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.761 12:09:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:16.761 12:09:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:16.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.761 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.761 12:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.762 12:09:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.762 12:09:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.762 12:09:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.762 12:09:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.762 12:09:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.762 12:09:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.762 12:09:00 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.762 12:09:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:16.762 12:09:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.762 12:09:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.762 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:16.762 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:16.762 12:09:00 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:16.762 12:09:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:23.424 12:09:07 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:23.424 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:23.424 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:23.424 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:23.424 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:23.424 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:23.425 12:09:07 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:23.425 12:09:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:23.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:23.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:35:23.686 00:35:23.686 --- 10.0.0.2 ping statistics --- 00:35:23.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.686 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:23.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:23.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:35:23.686 00:35:23.686 --- 10.0.0.1 ping statistics --- 00:35:23.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:23.686 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:23.686 12:09:08 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:23.686 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:23.686 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:23.686 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:23.948 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:23.948 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:23.948 12:09:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:23.948 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:23.948 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:23.948 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:23.948 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:23.948 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:24.518 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:24.518 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:24.518 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:24.518 12:09:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:24.779 12:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:24.779 12:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:24.779 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:24.779 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.040 12:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:25.040 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:25.040 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.040 12:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1309052 00:35:25.040 12:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:25.040 12:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:25.040 12:09:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1309052 00:35:25.040 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1309052 ']' 00:35:25.040 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.040 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:25.040 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.040 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:25.040 12:09:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.040 [2024-10-11 12:09:09.476293] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:35:25.040 [2024-10-11 12:09:09.476365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.040 [2024-10-11 12:09:09.564503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:25.040 [2024-10-11 12:09:09.619112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.040 [2024-10-11 12:09:09.619167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:25.040 [2024-10-11 12:09:09.619176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.040 [2024-10-11 12:09:09.619184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.040 [2024-10-11 12:09:09.619191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:25.040 [2024-10-11 12:09:09.621288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.040 [2024-10-11 12:09:09.621451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:25.040 [2024-10-11 12:09:09.621612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:25.040 [2024-10-11 12:09:09.621612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.984 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:25.984 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:35:25.984 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:25.984 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.984 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.984 INFO: Log level set to 20 00:35:25.984 INFO: Requests: 00:35:25.984 { 00:35:25.984 "jsonrpc": "2.0", 00:35:25.984 "method": "nvmf_set_config", 00:35:25.984 "id": 1, 00:35:25.984 "params": { 00:35:25.984 "admin_cmd_passthru": { 00:35:25.984 "identify_ctrlr": true 00:35:25.984 } 00:35:25.984 } 00:35:25.984 } 00:35:25.984 00:35:25.984 INFO: response: 00:35:25.984 { 00:35:25.984 "jsonrpc": "2.0", 00:35:25.984 "id": 1, 00:35:25.984 "result": true 00:35:25.984 } 00:35:25.984 00:35:25.984 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.984 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:25.984 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.984 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.984 INFO: Setting log level to 20 00:35:25.984 INFO: Setting log level to 20 00:35:25.984 INFO: Log level set to 20 00:35:25.984 INFO: Log level set to 20 00:35:25.984 INFO: Requests: 00:35:25.984 { 00:35:25.984 "jsonrpc": "2.0", 00:35:25.984 "method": "framework_start_init", 00:35:25.984 "id": 1 00:35:25.984 } 00:35:25.984 00:35:25.984 INFO: Requests: 00:35:25.984 { 00:35:25.984 "jsonrpc": "2.0", 00:35:25.984 "method": "framework_start_init", 00:35:25.984 "id": 1 00:35:25.984 } 00:35:25.984 00:35:25.984 [2024-10-11 12:09:10.384401] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:25.984 INFO: response: 00:35:25.984 { 00:35:25.984 "jsonrpc": "2.0", 00:35:25.984 "id": 1, 00:35:25.984 "result": true 00:35:25.984 } 00:35:25.984 00:35:25.984 INFO: response: 00:35:25.984 { 00:35:25.984 "jsonrpc": "2.0", 00:35:25.984 "id": 1, 00:35:25.984 "result": true 00:35:25.984 } 00:35:25.984 00:35:25.985 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.985 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:25.985 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.985 12:09:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:25.985 INFO: Setting log level to 40 00:35:25.985 INFO: Setting log level to 40 00:35:25.985 INFO: Setting log level to 40 00:35:25.985 [2024-10-11 12:09:10.397954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.985 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.985 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:25.985 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:25.985 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:25.985 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:25.985 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.985 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.246 Nvme0n1 00:35:26.246 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.246 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:26.246 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.246 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.246 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.246 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:26.246 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.246 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.246 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.246 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:26.246 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.247 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.247 [2024-10-11 12:09:10.807940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.247 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.247 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:26.247 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.247 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.247 [ 00:35:26.247 { 00:35:26.247 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:26.247 "subtype": "Discovery", 00:35:26.247 "listen_addresses": [], 00:35:26.247 "allow_any_host": true, 00:35:26.247 "hosts": [] 00:35:26.247 }, 00:35:26.247 { 00:35:26.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:26.247 "subtype": "NVMe", 00:35:26.247 "listen_addresses": [ 00:35:26.247 { 00:35:26.247 "trtype": "TCP", 00:35:26.247 "adrfam": "IPv4", 00:35:26.247 "traddr": "10.0.0.2", 00:35:26.247 "trsvcid": "4420" 00:35:26.247 } 00:35:26.247 ], 00:35:26.247 "allow_any_host": true, 00:35:26.247 "hosts": [], 00:35:26.247 "serial_number": 
"SPDK00000000000001", 00:35:26.247 "model_number": "SPDK bdev Controller", 00:35:26.247 "max_namespaces": 1, 00:35:26.247 "min_cntlid": 1, 00:35:26.247 "max_cntlid": 65519, 00:35:26.247 "namespaces": [ 00:35:26.247 { 00:35:26.247 "nsid": 1, 00:35:26.247 "bdev_name": "Nvme0n1", 00:35:26.247 "name": "Nvme0n1", 00:35:26.247 "nguid": "36344730526054870025384500000044", 00:35:26.247 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:26.247 } 00:35:26.247 ] 00:35:26.247 } 00:35:26.247 ] 00:35:26.247 12:09:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.247 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:26.247 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:26.247 12:09:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:26.509 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:26.509 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:26.509 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:26.509 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:26.770 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:26.770 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:26.770 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:26.770 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.770 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:26.770 12:09:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:26.770 rmmod nvme_tcp 00:35:26.770 rmmod nvme_fabrics 00:35:26.770 rmmod nvme_keyring 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 
1309052 ']' 00:35:26.770 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1309052 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1309052 ']' 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1309052 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1309052 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1309052' 00:35:26.770 killing process with pid 1309052 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1309052 00:35:26.770 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1309052 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:27.031 12:09:11 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.031 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:27.031 12:09:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.575 12:09:13 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:29.575 00:35:29.575 real 0m13.096s 00:35:29.575 user 0m10.281s 00:35:29.575 sys 0m6.459s 00:35:29.575 12:09:13 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:29.575 12:09:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:29.575 ************************************ 00:35:29.575 END TEST nvmf_identify_passthru 00:35:29.575 ************************************ 00:35:29.575 12:09:13 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:29.575 12:09:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:29.575 12:09:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:29.575 12:09:13 -- common/autotest_common.sh@10 -- # set +x 00:35:29.575 ************************************ 00:35:29.575 START TEST nvmf_dif 00:35:29.575 ************************************ 00:35:29.575 12:09:13 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:29.575 * Looking for test storage... 
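The serial/model passthrough check that finished just above boils down to a short shell sequence: identify the controller over the fabric, extract the two fields, and compare them with the values read earlier from the local PCIe device. A minimal sketch, assuming the paths from this run and using $local_serial/$local_model as hypothetical stand-ins for the locally captured values:

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    TRID=' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

    # Identify the remote controller over NVMe/TCP and pull out the two fields,
    # exactly as the grep/awk pipeline in the trace does.
    nvmf_serial=$("$SPDK_BIN/spdk_nvme_identify" -r "$TRID" | grep 'Serial Number:' | awk '{print $3}')
    nvmf_model=$("$SPDK_BIN/spdk_nvme_identify" -r "$TRID" | grep 'Model Number:' | awk '{print $3}')

    # The passthru test fails if the fabric-exposed identity diverges from the
    # local controller's ($local_serial/$local_model are illustrative names only).
    [ "$nvmf_serial" != "$local_serial" ] && exit 1
    [ "$nvmf_model" != "$local_model" ] && exit 1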
00:35:29.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.576 --rc genhtml_branch_coverage=1 00:35:29.576 --rc genhtml_function_coverage=1 00:35:29.576 --rc genhtml_legend=1 00:35:29.576 --rc geninfo_all_blocks=1 00:35:29.576 --rc geninfo_unexecuted_blocks=1 00:35:29.576 00:35:29.576 ' 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.576 --rc genhtml_branch_coverage=1 00:35:29.576 --rc genhtml_function_coverage=1 00:35:29.576 --rc genhtml_legend=1 00:35:29.576 --rc geninfo_all_blocks=1 00:35:29.576 --rc geninfo_unexecuted_blocks=1 00:35:29.576 00:35:29.576 ' 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:35:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.576 --rc genhtml_branch_coverage=1 00:35:29.576 --rc genhtml_function_coverage=1 00:35:29.576 --rc genhtml_legend=1 00:35:29.576 --rc geninfo_all_blocks=1 00:35:29.576 --rc geninfo_unexecuted_blocks=1 00:35:29.576 00:35:29.576 ' 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.576 --rc genhtml_branch_coverage=1 00:35:29.576 --rc genhtml_function_coverage=1 00:35:29.576 --rc genhtml_legend=1 00:35:29.576 --rc geninfo_all_blocks=1 00:35:29.576 --rc geninfo_unexecuted_blocks=1 00:35:29.576 00:35:29.576 ' 00:35:29.576 12:09:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.576 12:09:13 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.576 12:09:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.576 12:09:13 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.576 12:09:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.576 12:09:13 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:29.576 12:09:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:29.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:29.576 12:09:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:29.576 12:09:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:29.576 12:09:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:29.576 12:09:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:29.576 12:09:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:29.576 12:09:13 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:29.576 12:09:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:37.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.717 
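The device-discovery pass traced here maps each supported PCI function to its kernel net interface purely through sysfs, which is where the "Found net devices under ..." messages come from. A standalone sketch of that lookup (the PCI address is taken from this run; the rest is illustrative):

    pci=0000:4b:00.0
    # Any netdev bound to this function appears as a directory under .../net/.
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
    done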
12:09:21 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:37.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:37.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:37.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:37.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:37.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:35:37.717 00:35:37.717 --- 10.0.0.2 ping statistics --- 00:35:37.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.717 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:37.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:37.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:35:37.717 00:35:37.717 --- 10.0.0.1 ping statistics --- 00:35:37.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.717 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:37.717 12:09:21 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:40.263 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:40.263 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:40.263 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:40.524 12:09:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:40.524 12:09:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:40.524 12:09:24 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:40.524 12:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1315315 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1315315 00:35:40.524 12:09:24 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:40.524 12:09:24 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1315315 ']' 00:35:40.524 12:09:24 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.524 12:09:24 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:40.524 12:09:24 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:40.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.524 12:09:24 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:40.524 12:09:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:40.524 [2024-10-11 12:09:25.033820] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:35:40.524 [2024-10-11 12:09:25.033882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.524 [2024-10-11 12:09:25.121082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.785 [2024-10-11 12:09:25.172951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.785 [2024-10-11 12:09:25.173003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.785 [2024-10-11 12:09:25.173011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.785 [2024-10-11 12:09:25.173018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.785 [2024-10-11 12:09:25.173024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:40.785 [2024-10-11 12:09:25.173794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:41.356 12:09:25 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:41.356 12:09:25 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.356 12:09:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:41.356 12:09:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:41.356 [2024-10-11 12:09:25.915354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.356 12:09:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:41.356 12:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:41.356 ************************************ 00:35:41.356 START TEST fio_dif_1_default 00:35:41.356 ************************************ 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:41.356 bdev_null0 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:41.356 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.357 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:41.617 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.617 12:09:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:41.617 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.617 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:41.617 12:09:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.617 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:41.617 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.617 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:41.617 [2024-10-11 12:09:26.007826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.617 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:41.618 { 00:35:41.618 "params": { 00:35:41.618 "name": "Nvme$subsystem", 00:35:41.618 "trtype": "$TEST_TRANSPORT", 00:35:41.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:41.618 "adrfam": "ipv4", 00:35:41.618 "trsvcid": "$NVMF_PORT", 00:35:41.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:41.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:41.618 "hdgst": ${hdgst:-false}, 00:35:41.618 
"ddgst": ${ddgst:-false} 00:35:41.618 }, 00:35:41.618 "method": "bdev_nvme_attach_controller" 00:35:41.618 } 00:35:41.618 EOF 00:35:41.618 )") 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:41.618 "params": { 00:35:41.618 "name": "Nvme0", 00:35:41.618 "trtype": "tcp", 00:35:41.618 "traddr": "10.0.0.2", 00:35:41.618 "adrfam": "ipv4", 00:35:41.618 "trsvcid": "4420", 00:35:41.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:41.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:41.618 "hdgst": false, 00:35:41.618 "ddgst": false 00:35:41.618 }, 00:35:41.618 "method": "bdev_nvme_attach_controller" 00:35:41.618 }' 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:41.618 12:09:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:41.878 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:41.878 fio-3.35 00:35:41.878 Starting 1 thread 00:35:54.107 00:35:54.107 filename0: (groupid=0, jobs=1): err= 0: pid=1315894: Fri Oct 11 12:09:37 2024 00:35:54.107 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10004msec) 00:35:54.107 slat (nsec): min=5646, max=32812, avg=6516.24, stdev=1474.81 00:35:54.107 clat (usec): min=507, max=42409, avg=21086.61, stdev=20166.66 00:35:54.107 lat (usec): min=512, max=42442, avg=21093.13, stdev=20166.65 00:35:54.107 clat percentiles (usec): 00:35:54.107 | 1.00th=[ 586], 5.00th=[ 783], 10.00th=[ 816], 20.00th=[ 840], 00:35:54.107 | 30.00th=[ 848], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:35:54.107 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:54.107 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:54.107 | 99.99th=[42206] 00:35:54.107 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:35:54.107 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:35:54.107 lat (usec) : 750=3.53%, 1000=45.68% 00:35:54.107 lat (msec) : 2=0.58%, 50=50.21% 00:35:54.107 cpu : usr=94.22%, sys=5.57%, ctx=10, majf=0, minf=232 00:35:54.107 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:54.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.107 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.107 latency : target=0, window=0, percentile=100.00%, depth=4 
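Each fio case here is bracketed by the same RPC lifecycle: create_subsystems builds a DIF-capable null bdev and exports it over TCP, and destroy_subsystems (running just below) tears it down again. rpc_cmd in the trace wraps scripts/rpc.py against the target's RPC socket; spelled out directly, assuming the default socket, the sequence is:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Setup: 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1,
    # exported through an NVMe-oF subsystem listening on TCP 10.0.0.2:4420.
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Teardown mirrors it in reverse order.
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC bdev_null_delete bdev_null0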
00:35:54.107 00:35:54.107 Run status group 0 (all jobs): 00:35:54.107 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10004-10004msec 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.107 00:35:54.107 real 0m11.240s 00:35:54.107 user 0m18.238s 00:35:54.107 sys 0m0.964s 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:54.107 12:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:54.107 ************************************ 00:35:54.107 END TEST fio_dif_1_default 00:35:54.107 ************************************ 00:35:54.107 12:09:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:54.107 12:09:37 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:54.108 12:09:37 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 ************************************ 00:35:54.108 START TEST fio_dif_1_multi_subsystems 00:35:54.108 ************************************ 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 bdev_null0 00:35:54.108 12:09:37 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 [2024-10-11 12:09:37.327596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 bdev_null1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:54.108 { 00:35:54.108 "params": { 00:35:54.108 "name": "Nvme$subsystem", 00:35:54.108 "trtype": "$TEST_TRANSPORT", 00:35:54.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.108 "adrfam": "ipv4", 00:35:54.108 "trsvcid": "$NVMF_PORT", 00:35:54.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.108 "hdgst": ${hdgst:-false}, 00:35:54.108 "ddgst": ${ddgst:-false} 00:35:54.108 }, 00:35:54.108 "method": "bdev_nvme_attach_controller" 00:35:54.108 } 00:35:54.108 EOF 00:35:54.108 )") 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:54.108 { 00:35:54.108 "params": { 00:35:54.108 "name": "Nvme$subsystem", 00:35:54.108 "trtype": "$TEST_TRANSPORT", 00:35:54.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:54.108 "adrfam": "ipv4", 00:35:54.108 "trsvcid": "$NVMF_PORT", 00:35:54.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:54.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:54.108 "hdgst": ${hdgst:-false}, 00:35:54.108 "ddgst": ${ddgst:-false} 00:35:54.108 }, 00:35:54.108 "method": "bdev_nvme_attach_controller" 00:35:54.108 } 00:35:54.108 EOF 00:35:54.108 )") 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:54.108 "params": { 00:35:54.108 "name": "Nvme0", 00:35:54.108 "trtype": "tcp", 00:35:54.108 "traddr": "10.0.0.2", 00:35:54.108 "adrfam": "ipv4", 00:35:54.108 "trsvcid": "4420", 00:35:54.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.108 "hdgst": false, 00:35:54.108 "ddgst": false 00:35:54.108 }, 00:35:54.108 "method": "bdev_nvme_attach_controller" 00:35:54.108 },{ 00:35:54.108 "params": { 00:35:54.108 "name": "Nvme1", 00:35:54.108 "trtype": "tcp", 00:35:54.108 "traddr": "10.0.0.2", 00:35:54.108 "adrfam": "ipv4", 00:35:54.108 "trsvcid": "4420", 00:35:54.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:54.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:54.108 "hdgst": false, 00:35:54.108 "ddgst": false 00:35:54.108 }, 00:35:54.108 "method": "bdev_nvme_attach_controller" 00:35:54.108 }' 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:54.108 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:54.109 12:09:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:54.109 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:54.109 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:54.109 fio-3.35 00:35:54.109 Starting 2 threads 00:36:04.105 00:36:04.105 filename0: (groupid=0, jobs=1): err= 0: pid=1318266: Fri Oct 11 12:09:48 2024 00:36:04.105 read: IOPS=189, BW=758KiB/s (777kB/s)(7616KiB/10041msec) 00:36:04.105 slat (nsec): min=5646, max=52975, avg=6537.61, stdev=1992.08 00:36:04.105 clat (usec): min=595, max=42074, avg=21075.09, stdev=20160.17 00:36:04.105 lat (usec): min=601, max=42104, avg=21081.63, stdev=20160.06 00:36:04.105 clat percentiles (usec): 00:36:04.105 | 1.00th=[ 693], 5.00th=[ 734], 10.00th=[ 791], 20.00th=[ 832], 00:36:04.105 | 30.00th=[ 848], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:36:04.105 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:04.105 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:36:04.105 | 99.99th=[42206] 00:36:04.105 bw ( KiB/s): min= 672, max= 768, per=57.12%, avg=760.00, stdev=25.16, samples=20 00:36:04.105 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:36:04.105 lat (usec) : 750=6.30%, 1000=41.81% 00:36:04.105 lat (msec) : 2=1.68%, 50=50.21% 00:36:04.105 cpu : usr=95.93%, sys=3.86%, ctx=14, majf=0, minf=224 00:36:04.105 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.105 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.105 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:04.105 filename1: (groupid=0, jobs=1): err= 0: pid=1318267: Fri Oct 11 12:09:48 2024 00:36:04.105 read: IOPS=143, BW=573KiB/s (587kB/s)(5744KiB/10017msec) 00:36:04.105 slat (nsec): min=5646, max=29871, avg=6451.80, stdev=1775.43 00:36:04.105 clat (usec): min=690, max=42937, avg=27883.09, stdev=18924.90 00:36:04.105 lat (usec): min=696, max=42944, avg=27889.54, stdev=18925.31 00:36:04.105 clat percentiles (usec): 00:36:04.105 | 1.00th=[ 775], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 865], 00:36:04.105 | 30.00th=[ 889], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:04.105 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:04.105 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:04.105 | 99.99th=[42730] 00:36:04.105 bw ( KiB/s): min= 384, max= 768, per=42.99%, avg=572.80, stdev=179.79, samples=20 00:36:04.105 iops : min= 96, max= 192, avg=143.20, stdev=44.95, samples=20 00:36:04.105 lat (usec) : 750=0.42%, 1000=32.45% 00:36:04.105 lat (msec) : 50=67.13% 00:36:04.105 cpu : usr=95.79%, sys=4.00%, ctx=14, majf=0, minf=96 00:36:04.105 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:04.105 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:04.105 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:04.105 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:04.105 00:36:04.105 Run status group 0 (all jobs): 00:36:04.105 READ: bw=1331KiB/s (1362kB/s), 573KiB/s-758KiB/s (587kB/s-777kB/s), io=13.0MiB (13.7MB), run=10017-10041msec 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.105 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.105 00:36:04.106 real 0m11.369s 00:36:04.106 user 0m34.882s 00:36:04.106 sys 0m1.181s 00:36:04.106 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:04.106 12:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:04.106 ************************************ 00:36:04.106 END TEST fio_dif_1_multi_subsystems 00:36:04.106 ************************************ 00:36:04.106 
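The teardown just traced, nvmf_delete_subsystem followed by bdev_null_delete for each subsystem id, reduces to a small helper. A minimal sketch reconstructed from the xtrace above, assuming rpc_cmd is the test suite's usual wrapper around scripts/rpc.py; the bdev names and NQNs match the log:

destroy_subsystem() {
    local sub_id=$1
    # Tear down in the order the trace shows (target/dif.sh@36-39):
    # remove the NVMe-oF subsystem first, then the null bdev that backed it.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
    rpc_cmd bdev_null_delete "bdev_null${sub_id}"
}

destroy_subsystems() {
    local sub
    for sub in "$@"; do
        destroy_subsystem "$sub"
    done
}

# As invoked above for the two-subsystem case: destroy_subsystems 0 1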
12:09:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:04.106 12:09:48 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:04.106 12:09:48 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:04.106 12:09:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:04.366 ************************************ 00:36:04.366 START TEST fio_dif_rand_params 00:36:04.366 ************************************ 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:04.366 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.367 bdev_null0 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:04.367 [2024-10-11 12:09:48.781679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:04.367 { 00:36:04.367 "params": { 00:36:04.367 "name": "Nvme$subsystem", 00:36:04.367 "trtype": "$TEST_TRANSPORT", 00:36:04.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.367 "adrfam": "ipv4", 00:36:04.367 "trsvcid": "$NVMF_PORT", 00:36:04.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.367 "hdgst": ${hdgst:-false}, 00:36:04.367 "ddgst": ${ddgst:-false} 00:36:04.367 }, 00:36:04.367 "method": "bdev_nvme_attach_controller" 00:36:04.367 } 00:36:04.367 EOF 00:36:04.367 )") 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.367 12:09:48 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:04.367 "params": { 00:36:04.367 "name": "Nvme0", 00:36:04.367 "trtype": "tcp", 00:36:04.367 "traddr": "10.0.0.2", 00:36:04.367 "adrfam": "ipv4", 00:36:04.367 "trsvcid": "4420", 00:36:04.367 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.367 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.367 "hdgst": false, 00:36:04.367 "ddgst": false 00:36:04.367 }, 00:36:04.367 "method": "bdev_nvme_attach_controller" 00:36:04.367 }' 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:04.367 12:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.627 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:04.627 ... 
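For reference, the invocation pattern the trace performs: the SPDK fio plugin is LD_PRELOADed into stock fio, and the generated JSON is handed to --spdk_json_conf on an anonymous fd. A minimal sketch, assuming the printed bdev_nvme_attach_controller params sit inside SPDK's standard "subsystems"/"bdev"/"config" JSON-config wrapper (the trace prints only the params object), with job.fio standing in for the job file the harness supplies on /dev/fd/61:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # path taken from the log

LD_PRELOAD="$SPDK_ROOT/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf=<(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
) job.fio  # job.fio: placeholder for the generated job file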
00:36:04.627 fio-3.35 00:36:04.627 Starting 3 threads 00:36:11.209 00:36:11.209 filename0: (groupid=0, jobs=1): err= 0: pid=1320463: Fri Oct 11 12:09:54 2024 00:36:11.209 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(186MiB/5046msec) 00:36:11.209 slat (nsec): min=5997, max=32412, avg=8901.44, stdev=1394.95 00:36:11.209 clat (usec): min=4829, max=90365, avg=10112.86, stdev=8144.74 00:36:11.209 lat (usec): min=4838, max=90374, avg=10121.76, stdev=8144.85 00:36:11.209 clat percentiles (usec): 00:36:11.209 | 1.00th=[ 5407], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7635], 00:36:11.209 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:36:11.209 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10814], 00:36:11.209 | 99.00th=[49021], 99.50th=[50070], 99.90th=[88605], 99.95th=[90702], 00:36:11.209 | 99.99th=[90702] 00:36:11.209 bw ( KiB/s): min=32000, max=44288, per=31.90%, avg=38118.40, stdev=3977.76, samples=10 00:36:11.209 iops : min= 250, max= 346, avg=297.80, stdev=31.08, samples=10 00:36:11.209 lat (msec) : 10=85.71%, 20=10.80%, 50=2.95%, 100=0.54% 00:36:11.209 cpu : usr=94.57%, sys=5.19%, ctx=8, majf=0, minf=126 00:36:11.209 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.209 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:11.209 filename0: (groupid=0, jobs=1): err= 0: pid=1320464: Fri Oct 11 12:09:54 2024 00:36:11.209 read: IOPS=319, BW=39.9MiB/s (41.8MB/s)(201MiB/5046msec) 00:36:11.209 slat (nsec): min=5778, max=32656, avg=9032.47, stdev=1372.10 00:36:11.209 clat (usec): min=4889, max=49634, avg=9360.02, stdev=5056.53 00:36:11.209 lat (usec): min=4898, max=49643, avg=9369.05, stdev=5056.63 00:36:11.209 clat percentiles (usec): 00:36:11.209 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7635], 00:36:11.209 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:36:11.209 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10421], 95.00th=[10945], 00:36:11.209 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49021], 99.95th=[49546], 00:36:11.209 | 99.99th=[49546] 00:36:11.209 bw ( KiB/s): min=31232, max=46592, per=34.47%, avg=41190.40, stdev=5223.42, samples=10 00:36:11.209 iops : min= 244, max= 364, avg=321.80, stdev=40.81, samples=10 00:36:11.209 lat (msec) : 10=80.82%, 20=17.57%, 50=1.61% 00:36:11.209 cpu : usr=94.71%, sys=5.03%, ctx=9, majf=0, minf=93 00:36:11.209 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.209 issued rwts: total=1611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:11.209 filename0: (groupid=0, jobs=1): err= 0: pid=1320465: Fri Oct 11 12:09:54 2024 00:36:11.209 read: IOPS=318, BW=39.9MiB/s (41.8MB/s)(201MiB/5046msec) 00:36:11.209 slat (nsec): min=5679, max=31546, avg=8843.93, stdev=1298.26 00:36:11.209 clat (usec): min=4293, max=51298, avg=9370.07, stdev=4754.35 00:36:11.209 lat (usec): min=4301, max=51307, avg=9378.91, stdev=4754.47 00:36:11.209 clat percentiles (usec): 00:36:11.209 | 1.00th=[ 5407], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7701], 
00:36:11.209 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:36:11.209 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[11207], 00:36:11.209 | 99.00th=[46400], 99.50th=[47449], 99.90th=[49546], 99.95th=[51119], 00:36:11.209 | 99.99th=[51119] 00:36:11.209 bw ( KiB/s): min=30976, max=47104, per=34.43%, avg=41139.20, stdev=4299.06, samples=10 00:36:11.209 iops : min= 242, max= 368, avg=321.40, stdev=33.59, samples=10 00:36:11.209 lat (msec) : 10=80.17%, 20=18.40%, 50=1.37%, 100=0.06% 00:36:11.209 cpu : usr=94.27%, sys=5.47%, ctx=8, majf=0, minf=69 00:36:11.209 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:11.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:11.209 issued rwts: total=1609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:11.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:11.209 00:36:11.209 Run status group 0 (all jobs): 00:36:11.209 READ: bw=117MiB/s (122MB/s), 36.9MiB/s-39.9MiB/s (38.7MB/s-41.8MB/s), io=589MiB (617MB), run=5046-5046msec 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 bdev_null0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 [2024-10-11 12:09:54.919346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 bdev_null1 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.209 bdev_null2 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.209 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:11.210 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.210 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.210 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.210 12:09:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:11.210 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.210 12:09:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.210 12:09:55 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:11.210 { 00:36:11.210 "params": { 00:36:11.210 "name": "Nvme$subsystem", 00:36:11.210 "trtype": "$TEST_TRANSPORT", 00:36:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.210 "adrfam": "ipv4", 00:36:11.210 "trsvcid": "$NVMF_PORT", 00:36:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.210 "hdgst": ${hdgst:-false}, 00:36:11.210 "ddgst": ${ddgst:-false} 00:36:11.210 }, 00:36:11.210 "method": "bdev_nvme_attach_controller" 00:36:11.210 } 00:36:11.210 EOF 00:36:11.210 )") 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:11.210 { 00:36:11.210 "params": { 00:36:11.210 "name": "Nvme$subsystem", 00:36:11.210 "trtype": "$TEST_TRANSPORT", 00:36:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.210 "adrfam": "ipv4", 00:36:11.210 "trsvcid": "$NVMF_PORT", 00:36:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.210 "hdgst": ${hdgst:-false}, 00:36:11.210 "ddgst": ${ddgst:-false} 00:36:11.210 }, 00:36:11.210 "method": "bdev_nvme_attach_controller" 00:36:11.210 } 00:36:11.210 EOF 00:36:11.210 )") 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.210 12:09:55 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:11.210 { 00:36:11.210 "params": { 00:36:11.210 "name": "Nvme$subsystem", 00:36:11.210 "trtype": "$TEST_TRANSPORT", 00:36:11.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.210 "adrfam": "ipv4", 00:36:11.210 "trsvcid": "$NVMF_PORT", 00:36:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.210 "hdgst": ${hdgst:-false}, 00:36:11.210 "ddgst": ${ddgst:-false} 00:36:11.210 }, 00:36:11.210 "method": "bdev_nvme_attach_controller" 00:36:11.210 } 00:36:11.210 EOF 00:36:11.210 )") 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:11.210 "params": { 00:36:11.210 "name": "Nvme0", 00:36:11.210 "trtype": "tcp", 00:36:11.210 "traddr": "10.0.0.2", 00:36:11.210 "adrfam": "ipv4", 00:36:11.210 "trsvcid": "4420", 00:36:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:11.210 "hdgst": false, 00:36:11.210 "ddgst": false 00:36:11.210 }, 00:36:11.210 "method": "bdev_nvme_attach_controller" 00:36:11.210 },{ 00:36:11.210 "params": { 00:36:11.210 "name": "Nvme1", 00:36:11.210 "trtype": "tcp", 00:36:11.210 "traddr": "10.0.0.2", 00:36:11.210 "adrfam": "ipv4", 00:36:11.210 "trsvcid": "4420", 00:36:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:11.210 "hdgst": false, 00:36:11.210 "ddgst": false 00:36:11.210 }, 00:36:11.210 "method": "bdev_nvme_attach_controller" 00:36:11.210 },{ 00:36:11.210 "params": { 00:36:11.210 "name": "Nvme2", 00:36:11.210 "trtype": "tcp", 00:36:11.210 "traddr": "10.0.0.2", 00:36:11.210 "adrfam": "ipv4", 00:36:11.210 "trsvcid": "4420", 00:36:11.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:11.210 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:11.210 "hdgst": false, 00:36:11.210 "ddgst": false 00:36:11.210 }, 00:36:11.210 "method": "bdev_nvme_attach_controller" 00:36:11.210 }' 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:11.210 
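The nvmf/common.sh trace above spells out the JSON assembly pattern: one heredoc block per subsystem id accumulated into an array, then comma-joined into the config array and validated with jq. A condensed sketch of that pattern under the same names; the variable defaults are filled in here only so it runs standalone (the harness sets TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT), and the in-tree helper may append further config entries:

gen_nvmf_target_json() {
    : "${TEST_TRANSPORT:=tcp}" "${NVMF_FIRST_TARGET_IP:=10.0.0.2}" "${NVMF_PORT:=4420}"
    local subsystem config=()
    # nvmf/common.sh@560/@580: one attach-controller block per requested id.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # nvmf/common.sh@582-584: comma-join the blocks into the bdev config
    # array and pretty-print the result through jq.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=","; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}

# gen_nvmf_target_json 0 1 2 yields the three-controller document printed above.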
12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:11.210 12:09:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:11.210 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:11.210 ... 00:36:11.210 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:11.210 ... 00:36:11.210 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:11.210 ... 00:36:11.210 fio-3.35 00:36:11.210 Starting 24 threads 00:36:23.451 00:36:23.451 filename0: (groupid=0, jobs=1): err= 0: pid=1321964: Fri Oct 11 12:10:06 2024 00:36:23.451 read: IOPS=668, BW=2675KiB/s (2739kB/s)(26.5MiB/10145msec) 00:36:23.451 slat (usec): min=5, max=115, avg=13.05, stdev=10.89 00:36:23.452 clat (msec): min=3, max=149, avg=23.81, stdev= 6.35 00:36:23.452 lat (msec): min=3, max=149, avg=23.82, stdev= 6.35 00:36:23.452 clat percentiles (msec): 00:36:23.452 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.452 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.452 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.452 | 99.00th=[ 26], 99.50th=[ 30], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.452 | 99.99th=[ 150] 00:36:23.452 bw ( KiB/s): min= 2554, max= 2944, per=4.25%, avg=2706.30, stdev=74.71, samples=20 00:36:23.452 iops : min= 638, max= 736, avg=676.50, stdev=18.75, samples=20 00:36:23.452 lat (msec) : 4=0.03%, 10=0.41%, 20=2.14%, 50=97.18%, 250=0.24% 00:36:23.452 cpu : usr=98.68%, sys=0.90%, ctx=39, majf=0, minf=35 00:36:23.452 IO depths : 1=5.7%, 2=11.8%, 4=24.4%, 8=51.4%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:23.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.452 filename0: (groupid=0, jobs=1): err= 0: pid=1321965: Fri Oct 11 12:10:06 2024 00:36:23.452 read: IOPS=662, BW=2651KiB/s (2714kB/s)(26.2MiB/10116msec) 00:36:23.452 slat (nsec): min=5842, max=80838, avg=10429.47, stdev=6939.45 00:36:23.452 clat (msec): min=16, max=143, avg=24.06, stdev= 5.88 00:36:23.452 lat (msec): min=16, max=143, avg=24.07, stdev= 5.88 00:36:23.452 clat percentiles (msec): 00:36:23.452 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.452 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.452 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.452 | 99.00th=[ 25], 99.50th=[ 31], 99.90th=[ 144], 99.95th=[ 144], 00:36:23.452 | 99.99th=[ 144] 00:36:23.452 bw ( KiB/s): min= 2534, max= 2816, per=4.20%, avg=2673.30, stdev=60.76, samples=20 00:36:23.452 iops : min= 633, max= 704, avg=668.25, stdev=15.30, samples=20 00:36:23.452 lat (msec) : 20=0.48%, 50=99.28%, 250=0.24% 00:36:23.452 cpu : usr=98.61%, sys=0.93%, ctx=81, majf=0, minf=23 00:36:23.452 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:23.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.452 filename0: (groupid=0, jobs=1): err= 0: pid=1321966: Fri Oct 11 12:10:06 2024 00:36:23.452 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.3MiB/10141msec) 00:36:23.452 slat (usec): min=5, max=112, avg=29.78, stdev=18.38 00:36:23.452 clat (msec): min=11, max=150, avg=23.84, stdev= 6.20 00:36:23.452 lat (msec): min=11, max=150, avg=23.86, stdev= 6.20 00:36:23.452 clat percentiles (msec): 00:36:23.452 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.452 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.452 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 24], 00:36:23.452 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.452 | 99.99th=[ 150] 00:36:23.452 bw ( KiB/s): min= 2554, max= 2821, per=4.22%, avg=2687.35, stdev=43.35, samples=20 00:36:23.452 iops : min= 638, max= 705, avg=671.75, stdev=10.89, samples=20 00:36:23.452 lat (msec) : 20=0.71%, 50=99.05%, 250=0.24% 00:36:23.452 cpu : usr=99.03%, sys=0.67%, ctx=56, majf=0, minf=20 00:36:23.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:23.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.452 filename0: (groupid=0, jobs=1): err= 0: pid=1321967: Fri Oct 11 12:10:06 2024 00:36:23.452 read: IOPS=664, BW=2656KiB/s (2720kB/s)(26.3MiB/10143msec) 00:36:23.452 slat (usec): min=6, max=111, avg=33.94, stdev=16.82 00:36:23.452 clat (msec): min=11, max=151, avg=23.78, stdev= 6.22 00:36:23.452 lat (msec): min=11, max=151, avg=23.82, stdev= 6.22 00:36:23.452 clat percentiles (msec): 00:36:23.452 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.452 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.452 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 24], 00:36:23.452 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.452 | 99.99th=[ 153] 00:36:23.452 bw ( KiB/s): min= 2554, max= 2821, per=4.22%, avg=2687.35, stdev=43.35, samples=20 00:36:23.452 iops : min= 638, max= 705, avg=671.75, stdev=10.89, samples=20 00:36:23.452 lat (msec) : 20=0.71%, 50=99.05%, 250=0.24% 00:36:23.452 cpu : usr=98.67%, sys=0.91%, ctx=40, majf=0, minf=24 00:36:23.452 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:23.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.452 filename0: (groupid=0, jobs=1): err= 0: pid=1321968: Fri Oct 11 12:10:06 2024 00:36:23.452 read: IOPS=661, BW=2645KiB/s (2708kB/s)(26.1MiB/10091msec) 00:36:23.452 slat (nsec): min=5821, max=58619, avg=12199.83, stdev=7250.09 00:36:23.452 clat (msec): min=17, max=143, avg=24.09, stdev= 6.00 00:36:23.452 lat (msec): min=17, max=143, avg=24.10, stdev= 6.00 00:36:23.452 clat percentiles (msec): 00:36:23.452 | 1.00th=[ 
23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.452 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.452 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.452 | 99.00th=[ 27], 99.50th=[ 31], 99.90th=[ 144], 99.95th=[ 144], 00:36:23.452 | 99.99th=[ 144] 00:36:23.452 bw ( KiB/s): min= 2432, max= 2816, per=4.18%, avg=2662.30, stdev=88.45, samples=20 00:36:23.452 iops : min= 608, max= 704, avg=665.55, stdev=22.11, samples=20 00:36:23.452 lat (msec) : 20=0.42%, 50=99.34%, 250=0.24% 00:36:23.452 cpu : usr=98.94%, sys=0.79%, ctx=13, majf=0, minf=20 00:36:23.452 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:23.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.452 filename0: (groupid=0, jobs=1): err= 0: pid=1321969: Fri Oct 11 12:10:06 2024 00:36:23.452 read: IOPS=659, BW=2640KiB/s (2703kB/s)(26.0MiB/10104msec) 00:36:23.452 slat (nsec): min=6171, max=93133, avg=26840.51, stdev=13493.71 00:36:23.452 clat (msec): min=15, max=151, avg=24.01, stdev= 6.41 00:36:23.452 lat (msec): min=15, max=151, avg=24.03, stdev= 6.41 00:36:23.452 clat percentiles (msec): 00:36:23.452 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.452 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.452 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 24], 00:36:23.452 | 99.00th=[ 25], 99.50th=[ 41], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.452 | 99.99th=[ 153] 00:36:23.452 bw ( KiB/s): min= 2432, max= 2816, per=4.18%, avg=2660.70, stdev=88.25, samples=20 00:36:23.452 iops : min= 608, max= 704, avg=665.15, stdev=22.06, samples=20 00:36:23.452 lat (msec) : 20=0.36%, 50=99.31%, 100=0.09%, 250=0.24% 00:36:23.452 cpu : usr=98.45%, sys=0.99%, ctx=163, majf=0, minf=16 00:36:23.452 IO depths : 1=5.3%, 2=10.6%, 4=21.4%, 8=54.6%, 16=8.1%, 32=0.0%, >=64=0.0% 00:36:23.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 complete : 0=0.0%, 4=93.4%, 8=1.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 issued rwts: total=6668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.452 filename0: (groupid=0, jobs=1): err= 0: pid=1321970: Fri Oct 11 12:10:06 2024 00:36:23.452 read: IOPS=667, BW=2669KiB/s (2733kB/s)(26.4MiB/10145msec) 00:36:23.452 slat (nsec): min=5818, max=59644, avg=8688.30, stdev=4105.11 00:36:23.452 clat (msec): min=6, max=148, avg=23.90, stdev= 6.12 00:36:23.452 lat (msec): min=6, max=148, avg=23.91, stdev= 6.12 00:36:23.452 clat percentiles (msec): 00:36:23.452 | 1.00th=[ 15], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.452 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.452 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.452 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 146], 99.95th=[ 146], 00:36:23.452 | 99.99th=[ 148] 00:36:23.452 bw ( KiB/s): min= 2554, max= 2938, per=4.24%, avg=2699.60, stdev=70.42, samples=20 00:36:23.452 iops : min= 638, max= 734, avg=674.80, stdev=17.58, samples=20 00:36:23.452 lat (msec) : 10=0.47%, 20=0.95%, 50=98.35%, 250=0.24% 00:36:23.452 cpu : usr=98.29%, sys=1.16%, ctx=129, majf=0, minf=35 00:36:23.452 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 
16=6.4%, 32=0.0%, >=64=0.0% 00:36:23.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.452 filename0: (groupid=0, jobs=1): err= 0: pid=1321971: Fri Oct 11 12:10:06 2024 00:36:23.452 read: IOPS=670, BW=2683KiB/s (2747kB/s)(26.6MiB/10145msec) 00:36:23.452 slat (nsec): min=5859, max=97933, avg=26254.36, stdev=15258.00 00:36:23.452 clat (msec): min=7, max=151, avg=23.64, stdev= 6.71 00:36:23.452 lat (msec): min=7, max=151, avg=23.66, stdev= 6.71 00:36:23.452 clat percentiles (msec): 00:36:23.452 | 1.00th=[ 14], 5.00th=[ 19], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.452 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.452 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.452 | 99.00th=[ 35], 99.50th=[ 39], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.452 | 99.99th=[ 153] 00:36:23.452 bw ( KiB/s): min= 2602, max= 2970, per=4.26%, avg=2714.00, stdev=101.69, samples=20 00:36:23.452 iops : min= 650, max= 742, avg=678.40, stdev=25.28, samples=20 00:36:23.452 lat (msec) : 10=0.41%, 20=5.64%, 50=93.71%, 250=0.24% 00:36:23.452 cpu : usr=98.92%, sys=0.81%, ctx=12, majf=0, minf=18 00:36:23.452 IO depths : 1=4.8%, 2=9.7%, 4=20.4%, 8=57.1%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:23.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 complete : 0=0.0%, 4=92.9%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.452 issued rwts: total=6804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.452 filename1: (groupid=0, jobs=1): err= 0: pid=1321972: Fri Oct 11 12:10:06 2024 00:36:23.452 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.3MiB/10141msec) 00:36:23.452 slat (nsec): min=5791, max=97275, avg=21461.31, stdev=14002.98 00:36:23.452 clat (msec): min=11, max=149, avg=23.92, stdev= 6.20 00:36:23.453 lat (msec): min=12, max=149, avg=23.94, stdev= 6.20 00:36:23.453 clat percentiles (msec): 00:36:23.453 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.453 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.453 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.453 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.453 | 99.99th=[ 150] 00:36:23.453 bw ( KiB/s): min= 2554, max= 2821, per=4.22%, avg=2687.35, stdev=46.06, samples=20 00:36:23.453 iops : min= 638, max= 705, avg=671.75, stdev=11.56, samples=20 00:36:23.453 lat (msec) : 20=0.74%, 50=99.02%, 250=0.24% 00:36:23.453 cpu : usr=99.08%, sys=0.63%, ctx=24, majf=0, minf=20 00:36:23.453 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.453 filename1: (groupid=0, jobs=1): err= 0: pid=1321973: Fri Oct 11 12:10:06 2024 00:36:23.453 read: IOPS=664, BW=2656KiB/s (2720kB/s)(26.3MiB/10143msec) 00:36:23.453 slat (nsec): min=5836, max=99571, avg=27605.05, stdev=17423.81 00:36:23.453 clat (msec): min=11, max=152, avg=23.87, stdev= 6.22 00:36:23.453 lat 
(msec): min=11, max=152, avg=23.89, stdev= 6.21 00:36:23.453 clat percentiles (msec): 00:36:23.453 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.453 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.453 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 24], 00:36:23.453 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.453 | 99.99th=[ 153] 00:36:23.453 bw ( KiB/s): min= 2554, max= 2821, per=4.22%, avg=2687.35, stdev=43.35, samples=20 00:36:23.453 iops : min= 638, max= 705, avg=671.75, stdev=10.89, samples=20 00:36:23.453 lat (msec) : 20=0.71%, 50=99.05%, 250=0.24% 00:36:23.453 cpu : usr=98.69%, sys=0.85%, ctx=161, majf=0, minf=17 00:36:23.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.453 filename1: (groupid=0, jobs=1): err= 0: pid=1321974: Fri Oct 11 12:10:06 2024 00:36:23.453 read: IOPS=660, BW=2641KiB/s (2704kB/s)(26.1MiB/10106msec) 00:36:23.453 slat (usec): min=6, max=104, avg=27.24, stdev=13.90 00:36:23.453 clat (msec): min=22, max=152, avg=23.97, stdev= 6.32 00:36:23.453 lat (msec): min=22, max=152, avg=24.00, stdev= 6.32 00:36:23.453 clat percentiles (msec): 00:36:23.453 | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.453 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.453 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 24], 00:36:23.453 | 99.00th=[ 25], 99.50th=[ 26], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.453 | 99.99th=[ 153] 00:36:23.453 bw ( KiB/s): min= 2432, max= 2816, per=4.18%, avg=2662.10, stdev=88.99, samples=20 00:36:23.453 iops : min= 608, max= 704, avg=665.50, stdev=22.24, samples=20 00:36:23.453 lat (msec) : 50=99.76%, 250=0.24% 00:36:23.453 cpu : usr=98.47%, sys=1.02%, ctx=130, majf=0, minf=20 00:36:23.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.453 filename1: (groupid=0, jobs=1): err= 0: pid=1321975: Fri Oct 11 12:10:06 2024 00:36:23.453 read: IOPS=661, BW=2648KiB/s (2711kB/s)(26.1MiB/10103msec) 00:36:23.453 slat (usec): min=5, max=118, avg=28.72, stdev=18.65 00:36:23.453 clat (msec): min=7, max=150, avg=23.89, stdev= 6.45 00:36:23.453 lat (msec): min=7, max=150, avg=23.92, stdev= 6.45 00:36:23.453 clat percentiles (msec): 00:36:23.453 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.453 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.453 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.453 | 99.00th=[ 33], 99.50th=[ 43], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.453 | 99.99th=[ 150] 00:36:23.453 bw ( KiB/s): min= 2432, max= 2816, per=4.19%, avg=2668.30, stdev=87.01, samples=20 00:36:23.453 iops : min= 608, max= 704, avg=667.05, stdev=21.75, samples=20 00:36:23.453 lat (msec) : 10=0.12%, 20=1.05%, 50=98.59%, 250=0.24% 00:36:23.453 cpu : usr=98.66%, sys=0.88%, ctx=73, majf=0, 
minf=24 00:36:23.453 IO depths : 1=5.9%, 2=11.8%, 4=24.0%, 8=51.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 issued rwts: total=6687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.453 filename1: (groupid=0, jobs=1): err= 0: pid=1321976: Fri Oct 11 12:10:06 2024 00:36:23.453 read: IOPS=660, BW=2644KiB/s (2707kB/s)(26.1MiB/10118msec) 00:36:23.453 slat (usec): min=5, max=117, avg=28.74, stdev=18.29 00:36:23.453 clat (msec): min=9, max=150, avg=23.91, stdev= 6.15 00:36:23.453 lat (msec): min=9, max=150, avg=23.94, stdev= 6.15 00:36:23.453 clat percentiles (msec): 00:36:23.453 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.453 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.453 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 26], 00:36:23.453 | 99.00th=[ 34], 99.50th=[ 39], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.453 | 99.99th=[ 150] 00:36:23.453 bw ( KiB/s): min= 2474, max= 2794, per=4.19%, avg=2666.60, stdev=63.26, samples=20 00:36:23.453 iops : min= 618, max= 698, avg=666.60, stdev=15.84, samples=20 00:36:23.453 lat (msec) : 10=0.06%, 20=2.30%, 50=97.32%, 100=0.12%, 250=0.19% 00:36:23.453 cpu : usr=99.03%, sys=0.67%, ctx=13, majf=0, minf=23 00:36:23.453 IO depths : 1=4.7%, 2=9.4%, 4=20.2%, 8=57.1%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 complete : 0=0.0%, 4=93.0%, 8=2.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 issued rwts: total=6687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.453 filename1: (groupid=0, jobs=1): err= 0: pid=1321977: Fri Oct 11 12:10:06 2024 00:36:23.453 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.4MiB/10142msec) 00:36:23.453 slat (usec): min=5, max=122, avg=13.59, stdev=13.60 00:36:23.453 clat (msec): min=7, max=149, avg=23.92, stdev= 6.25 00:36:23.453 lat (msec): min=7, max=149, avg=23.94, stdev= 6.25 00:36:23.453 clat percentiles (msec): 00:36:23.453 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.453 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.453 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.453 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.453 | 99.99th=[ 150] 00:36:23.453 bw ( KiB/s): min= 2554, max= 2938, per=4.23%, avg=2693.20, stdev=64.88, samples=20 00:36:23.453 iops : min= 638, max= 734, avg=673.20, stdev=16.19, samples=20 00:36:23.453 lat (msec) : 10=0.24%, 20=0.95%, 50=98.58%, 250=0.24% 00:36:23.453 cpu : usr=98.92%, sys=0.79%, ctx=18, majf=0, minf=24 00:36:23.453 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.453 filename1: (groupid=0, jobs=1): err= 0: pid=1321978: Fri Oct 11 12:10:06 2024 00:36:23.453 read: IOPS=664, BW=2656KiB/s (2720kB/s)(26.3MiB/10143msec) 00:36:23.453 slat (usec): min=5, max=104, avg=28.78, stdev=18.06 00:36:23.453 clat (msec): min=6, 
max=148, avg=23.83, stdev= 6.22 00:36:23.453 lat (msec): min=6, max=148, avg=23.85, stdev= 6.22 00:36:23.453 clat percentiles (msec): 00:36:23.453 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.453 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.453 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.453 | 99.00th=[ 26], 99.50th=[ 28], 99.90th=[ 148], 99.95th=[ 148], 00:36:23.453 | 99.99th=[ 148] 00:36:23.453 bw ( KiB/s): min= 2554, max= 2821, per=4.22%, avg=2687.35, stdev=43.35, samples=20 00:36:23.453 iops : min= 638, max= 705, avg=671.75, stdev=10.89, samples=20 00:36:23.453 lat (msec) : 10=0.18%, 20=0.77%, 50=98.81%, 250=0.24% 00:36:23.453 cpu : usr=98.61%, sys=0.88%, ctx=107, majf=0, minf=21 00:36:23.453 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.453 filename1: (groupid=0, jobs=1): err= 0: pid=1321979: Fri Oct 11 12:10:06 2024 00:36:23.453 read: IOPS=659, BW=2638KiB/s (2701kB/s)(26.0MiB/10104msec) 00:36:23.453 slat (nsec): min=5829, max=77465, avg=18998.25, stdev=12655.09 00:36:23.453 clat (msec): min=8, max=150, avg=24.09, stdev= 6.54 00:36:23.453 lat (msec): min=8, max=150, avg=24.11, stdev= 6.54 00:36:23.453 clat percentiles (msec): 00:36:23.453 | 1.00th=[ 20], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.453 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.453 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.453 | 99.00th=[ 36], 99.50th=[ 43], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.453 | 99.99th=[ 150] 00:36:23.453 bw ( KiB/s): min= 2432, max= 2816, per=4.18%, avg=2658.90, stdev=86.84, samples=20 00:36:23.453 iops : min= 608, max= 704, avg=664.70, stdev=21.73, samples=20 00:36:23.453 lat (msec) : 10=0.29%, 20=0.81%, 50=98.66%, 250=0.24% 00:36:23.453 cpu : usr=98.54%, sys=0.98%, ctx=151, majf=0, minf=19 00:36:23.453 IO depths : 1=5.4%, 2=11.2%, 4=23.4%, 8=52.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:23.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.453 issued rwts: total=6664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.453 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.453 filename2: (groupid=0, jobs=1): err= 0: pid=1321980: Fri Oct 11 12:10:06 2024 00:36:23.453 read: IOPS=690, BW=2762KiB/s (2828kB/s)(27.0MiB/10011msec) 00:36:23.453 slat (usec): min=5, max=108, avg=27.97, stdev=18.45 00:36:23.453 clat (usec): min=854, max=36636, avg=22954.97, stdev=3956.28 00:36:23.453 lat (usec): min=872, max=36659, avg=22982.93, stdev=3958.62 00:36:23.454 clat percentiles (usec): 00:36:23.454 | 1.00th=[ 1369], 5.00th=[17433], 10.00th=[23200], 20.00th=[23200], 00:36:23.454 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:36:23.454 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:36:23.454 | 99.00th=[31065], 99.50th=[31327], 99.90th=[36439], 99.95th=[36439], 00:36:23.454 | 99.99th=[36439] 00:36:23.454 bw ( KiB/s): min= 2682, max= 4215, per=4.35%, avg=2768.05, stdev=350.40, samples=19 00:36:23.454 iops : min= 670, max= 1053, avg=691.95, 
stdev=87.43, samples=19 00:36:23.454 lat (usec) : 1000=0.03% 00:36:23.454 lat (msec) : 2=2.24%, 4=0.04%, 10=0.46%, 20=3.36%, 50=93.87% 00:36:23.454 cpu : usr=98.93%, sys=0.78%, ctx=24, majf=0, minf=26 00:36:23.454 IO depths : 1=4.0%, 2=9.2%, 4=23.2%, 8=55.0%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:23.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 issued rwts: total=6912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.454 filename2: (groupid=0, jobs=1): err= 0: pid=1321981: Fri Oct 11 12:10:06 2024 00:36:23.454 read: IOPS=660, BW=2641KiB/s (2705kB/s)(26.1MiB/10104msec) 00:36:23.454 slat (nsec): min=5827, max=86367, avg=21644.69, stdev=12807.53 00:36:23.454 clat (msec): min=14, max=150, avg=24.02, stdev= 6.33 00:36:23.454 lat (msec): min=14, max=150, avg=24.05, stdev= 6.33 00:36:23.454 clat percentiles (msec): 00:36:23.454 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.454 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.454 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 24], 00:36:23.454 | 99.00th=[ 28], 99.50th=[ 35], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.454 | 99.99th=[ 150] 00:36:23.454 bw ( KiB/s): min= 2432, max= 2816, per=4.18%, avg=2662.10, stdev=89.44, samples=20 00:36:23.454 iops : min= 608, max= 704, avg=665.50, stdev=22.39, samples=20 00:36:23.454 lat (msec) : 20=0.18%, 50=99.58%, 250=0.24% 00:36:23.454 cpu : usr=99.02%, sys=0.70%, ctx=10, majf=0, minf=30 00:36:23.454 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:23.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.454 filename2: (groupid=0, jobs=1): err= 0: pid=1321982: Fri Oct 11 12:10:06 2024 00:36:23.454 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.4MiB/10142msec) 00:36:23.454 slat (usec): min=5, max=109, avg=28.12, stdev=19.94 00:36:23.454 clat (msec): min=7, max=149, avg=23.80, stdev= 6.26 00:36:23.454 lat (msec): min=7, max=149, avg=23.83, stdev= 6.26 00:36:23.454 clat percentiles (msec): 00:36:23.454 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.454 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.454 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 24], 00:36:23.454 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.454 | 99.99th=[ 150] 00:36:23.454 bw ( KiB/s): min= 2554, max= 2938, per=4.23%, avg=2693.20, stdev=64.88, samples=20 00:36:23.454 iops : min= 638, max= 734, avg=673.20, stdev=16.19, samples=20 00:36:23.454 lat (msec) : 10=0.24%, 20=0.95%, 50=98.58%, 250=0.24% 00:36:23.454 cpu : usr=98.80%, sys=0.85%, ctx=105, majf=0, minf=22 00:36:23.454 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:23.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.454 filename2: (groupid=0, jobs=1): err= 0: pid=1321983: Fri Oct 11 
12:10:06 2024 00:36:23.454 read: IOPS=662, BW=2649KiB/s (2713kB/s)(26.2MiB/10122msec) 00:36:23.454 slat (usec): min=6, max=105, avg=31.83, stdev=14.82 00:36:23.454 clat (msec): min=16, max=150, avg=23.88, stdev= 6.18 00:36:23.454 lat (msec): min=16, max=150, avg=23.91, stdev= 6.18 00:36:23.454 clat percentiles (msec): 00:36:23.454 | 1.00th=[ 23], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.454 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.454 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 24], 00:36:23.454 | 99.00th=[ 25], 99.50th=[ 25], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.454 | 99.99th=[ 150] 00:36:23.454 bw ( KiB/s): min= 2554, max= 2688, per=4.20%, avg=2674.90, stdev=40.33, samples=20 00:36:23.454 iops : min= 638, max= 672, avg=668.70, stdev=10.16, samples=20 00:36:23.454 lat (msec) : 20=0.24%, 50=99.52%, 250=0.24% 00:36:23.454 cpu : usr=98.97%, sys=0.76%, ctx=13, majf=0, minf=22 00:36:23.454 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:23.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.454 filename2: (groupid=0, jobs=1): err= 0: pid=1321984: Fri Oct 11 12:10:06 2024 00:36:23.454 read: IOPS=661, BW=2646KiB/s (2709kB/s)(26.1MiB/10108msec) 00:36:23.454 slat (usec): min=5, max=116, avg=28.44, stdev=19.96 00:36:23.454 clat (msec): min=8, max=150, avg=23.97, stdev= 6.76 00:36:23.454 lat (msec): min=8, max=150, avg=24.00, stdev= 6.76 00:36:23.454 clat percentiles (msec): 00:36:23.454 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.454 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.454 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 25], 95.00th=[ 28], 00:36:23.454 | 99.00th=[ 36], 99.50th=[ 40], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.454 | 99.99th=[ 150] 00:36:23.454 bw ( KiB/s): min= 2432, max= 2778, per=4.19%, avg=2667.70, stdev=75.98, samples=20 00:36:23.454 iops : min= 608, max= 694, avg=666.90, stdev=18.96, samples=20 00:36:23.454 lat (msec) : 10=0.06%, 20=5.10%, 50=94.47%, 100=0.13%, 250=0.24% 00:36:23.454 cpu : usr=98.29%, sys=1.12%, ctx=266, majf=0, minf=18 00:36:23.454 IO depths : 1=2.0%, 2=5.4%, 4=16.7%, 8=63.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:36:23.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 complete : 0=0.0%, 4=92.7%, 8=3.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 issued rwts: total=6686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.454 filename2: (groupid=0, jobs=1): err= 0: pid=1321985: Fri Oct 11 12:10:06 2024 00:36:23.454 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.4MiB/10144msec) 00:36:23.454 slat (nsec): min=5839, max=75655, avg=12228.70, stdev=9350.86 00:36:23.454 clat (msec): min=7, max=148, avg=23.93, stdev= 6.07 00:36:23.454 lat (msec): min=7, max=148, avg=23.95, stdev= 6.07 00:36:23.454 clat percentiles (msec): 00:36:23.454 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.454 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.454 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.454 | 99.00th=[ 26], 99.50th=[ 26], 99.90th=[ 146], 99.95th=[ 146], 00:36:23.454 | 99.99th=[ 148] 00:36:23.454 bw ( 
KiB/s): min= 2554, max= 2816, per=4.23%, avg=2693.50, stdev=49.53, samples=20 00:36:23.454 iops : min= 638, max= 704, avg=673.30, stdev=12.47, samples=20 00:36:23.454 lat (msec) : 10=0.24%, 20=0.71%, 50=98.82%, 250=0.24% 00:36:23.454 cpu : usr=99.13%, sys=0.57%, ctx=26, majf=0, minf=23 00:36:23.454 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:23.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.454 filename2: (groupid=0, jobs=1): err= 0: pid=1321986: Fri Oct 11 12:10:06 2024 00:36:23.454 read: IOPS=659, BW=2637KiB/s (2701kB/s)(26.0MiB/10106msec) 00:36:23.454 slat (nsec): min=5824, max=82197, avg=19723.87, stdev=13445.89 00:36:23.454 clat (msec): min=16, max=152, avg=24.09, stdev= 5.92 00:36:23.454 lat (msec): min=16, max=152, avg=24.11, stdev= 5.91 00:36:23.454 clat percentiles (msec): 00:36:23.454 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.454 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.454 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.454 | 99.00th=[ 31], 99.50th=[ 36], 99.90th=[ 148], 99.95th=[ 153], 00:36:23.454 | 99.99th=[ 153] 00:36:23.454 bw ( KiB/s): min= 2408, max= 2816, per=4.18%, avg=2658.50, stdev=91.65, samples=20 00:36:23.454 iops : min= 602, max= 704, avg=664.60, stdev=22.91, samples=20 00:36:23.454 lat (msec) : 20=0.54%, 50=99.22%, 100=0.05%, 250=0.20% 00:36:23.454 cpu : usr=99.10%, sys=0.59%, ctx=43, majf=0, minf=20 00:36:23.454 IO depths : 1=2.9%, 2=5.8%, 4=12.2%, 8=66.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:36:23.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 complete : 0=0.0%, 4=91.6%, 8=5.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 issued rwts: total=6663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.454 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:23.454 filename2: (groupid=0, jobs=1): err= 0: pid=1321987: Fri Oct 11 12:10:06 2024 00:36:23.454 read: IOPS=663, BW=2655KiB/s (2719kB/s)(26.2MiB/10112msec) 00:36:23.454 slat (nsec): min=5789, max=72879, avg=17589.95, stdev=10684.29 00:36:23.454 clat (msec): min=10, max=150, avg=23.96, stdev= 6.42 00:36:23.454 lat (msec): min=10, max=150, avg=23.97, stdev= 6.42 00:36:23.454 clat percentiles (msec): 00:36:23.454 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], 00:36:23.454 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:36:23.454 | 70.00th=[ 24], 80.00th=[ 24], 90.00th=[ 24], 95.00th=[ 25], 00:36:23.454 | 99.00th=[ 35], 99.50th=[ 37], 99.90th=[ 150], 99.95th=[ 150], 00:36:23.454 | 99.99th=[ 150] 00:36:23.454 bw ( KiB/s): min= 2432, max= 2858, per=4.21%, avg=2677.80, stdev=81.29, samples=20 00:36:23.454 iops : min= 608, max= 714, avg=669.40, stdev=20.29, samples=20 00:36:23.454 lat (msec) : 20=2.10%, 50=97.66%, 250=0.24% 00:36:23.454 cpu : usr=98.64%, sys=0.91%, ctx=109, majf=0, minf=20 00:36:23.454 IO depths : 1=5.5%, 2=11.2%, 4=23.5%, 8=52.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:36:23.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:23.454 issued rwts: total=6712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:23.454 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:36:23.454 00:36:23.454 Run status group 0 (all jobs): 00:36:23.454 READ: bw=62.1MiB/s (65.2MB/s), 2637KiB/s-2762KiB/s (2701kB/s-2828kB/s), io=630MiB (661MB), run=10011-10145msec 00:36:23.454 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 bdev_null0 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 [2024-10-11 12:10:06.752797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:23.455 12:10:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 bdev_null1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:23.455 { 00:36:23.455 "params": { 00:36:23.455 "name": "Nvme$subsystem", 00:36:23.455 "trtype": "$TEST_TRANSPORT", 00:36:23.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:23.455 "adrfam": "ipv4", 00:36:23.455 "trsvcid": "$NVMF_PORT", 00:36:23.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:23.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:23.455 "hdgst": ${hdgst:-false}, 00:36:23.455 "ddgst": ${ddgst:-false} 00:36:23.455 }, 00:36:23.455 "method": "bdev_nvme_attach_controller" 00:36:23.455 } 00:36:23.455 EOF 00:36:23.455 )") 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:23.455 { 00:36:23.455 "params": { 00:36:23.455 "name": "Nvme$subsystem", 00:36:23.455 "trtype": "$TEST_TRANSPORT", 00:36:23.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:23.455 "adrfam": "ipv4", 00:36:23.455 "trsvcid": "$NVMF_PORT", 00:36:23.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:23.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:23.455 "hdgst": ${hdgst:-false}, 00:36:23.455 "ddgst": ${ddgst:-false} 00:36:23.455 }, 00:36:23.455 "method": "bdev_nvme_attach_controller" 00:36:23.455 } 00:36:23.455 EOF 00:36:23.455 )") 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:23.455 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
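For orientation: gen_nvmf_target_json, traced above, accumulates one bdev_nvme_attach_controller fragment per subsystem in the config array (the two EOF heredocs) and hands the merged document to jq, which is what gets printed next. A minimal standalone sketch of the same pattern follows; the outer "subsystems"/"bdev" wrapper is assumed from SPDK's JSON config schema rather than shown verbatim in this trace:

config=()
for sub in 0 1; do
  # One attach-controller fragment per subsystem, matching the traced heredocs.
  printf -v frag '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$sub" "$sub" "$sub"
  config+=("$frag")
done
# Join the fragments with "," (the IFS=, printf seen below) and pretty-print with jq.
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "$(IFS=,; echo "${config[*]}")" | jq .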
00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:23.456 "params": { 00:36:23.456 "name": "Nvme0", 00:36:23.456 "trtype": "tcp", 00:36:23.456 "traddr": "10.0.0.2", 00:36:23.456 "adrfam": "ipv4", 00:36:23.456 "trsvcid": "4420", 00:36:23.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:23.456 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:23.456 "hdgst": false, 00:36:23.456 "ddgst": false 00:36:23.456 }, 00:36:23.456 "method": "bdev_nvme_attach_controller" 00:36:23.456 },{ 00:36:23.456 "params": { 00:36:23.456 "name": "Nvme1", 00:36:23.456 "trtype": "tcp", 00:36:23.456 "traddr": "10.0.0.2", 00:36:23.456 "adrfam": "ipv4", 00:36:23.456 "trsvcid": "4420", 00:36:23.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:23.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:23.456 "hdgst": false, 00:36:23.456 "ddgst": false 00:36:23.456 }, 00:36:23.456 "method": "bdev_nvme_attach_controller" 00:36:23.456 }' 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:23.456 12:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:23.456 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:23.456 ... 00:36:23.456 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:23.456 ... 
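The job file fio reads from /dev/fd/61 is produced by gen_fio_conf from the parameters set at target/dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1). A hand-written equivalent, a sketch only, assuming the attached controllers expose bdevs named Nvme0n1 and Nvme1n1 (SPDK's default <name>n<nsid> naming):

cat > rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
# the JSON document printed above, saved to a file
spdk_json_conf=./bdev.json
thread=1
time_based=1
runtime=5
rw=randread
# read,write,trim block sizes; these become the (R)/(W)/(T) banner below
bs=8k,16k,128k
iodepth=8
# two jobs per filename section, hence the "Starting 4 threads" banner
numjobs=2

[filename0]
# assumed bdev name from bdev_nvme_attach_controller name Nvme0, namespace 1
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio rand_params.fio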
00:36:23.456 fio-3.35 00:36:23.456 Starting 4 threads 00:36:28.740 00:36:28.740 filename0: (groupid=0, jobs=1): err= 0: pid=1324192: Fri Oct 11 12:10:12 2024 00:36:28.740 read: IOPS=2985, BW=23.3MiB/s (24.5MB/s)(117MiB/5003msec) 00:36:28.740 slat (nsec): min=5643, max=68895, avg=8988.31, stdev=2937.68 00:36:28.740 clat (usec): min=951, max=4852, avg=2655.07, stdev=221.24 00:36:28.740 lat (usec): min=960, max=4860, avg=2664.06, stdev=221.23 00:36:28.740 clat percentiles (usec): 00:36:28.740 | 1.00th=[ 1926], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2606], 00:36:28.740 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:28.740 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2802], 95.00th=[ 2933], 00:36:28.740 | 99.00th=[ 3425], 99.50th=[ 3818], 99.90th=[ 4228], 99.95th=[ 4621], 00:36:28.740 | 99.99th=[ 4817] 00:36:28.740 bw ( KiB/s): min=23520, max=24128, per=25.20%, avg=23902.22, stdev=193.40, samples=9 00:36:28.740 iops : min= 2940, max= 3016, avg=2987.78, stdev=24.18, samples=9 00:36:28.740 lat (usec) : 1000=0.02% 00:36:28.740 lat (msec) : 2=1.19%, 4=98.47%, 10=0.32% 00:36:28.740 cpu : usr=97.04%, sys=2.70%, ctx=6, majf=0, minf=9 00:36:28.740 IO depths : 1=0.1%, 2=0.2%, 4=72.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.740 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.740 issued rwts: total=14935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.740 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:28.740 filename0: (groupid=0, jobs=1): err= 0: pid=1324193: Fri Oct 11 12:10:12 2024 00:36:28.740 read: IOPS=2965, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:36:28.740 slat (nsec): min=8242, max=84768, avg=9328.97, stdev=2990.52 00:36:28.740 clat (usec): min=1194, max=5215, avg=2673.31, stdev=251.35 00:36:28.740 lat (usec): min=1203, max=5223, avg=2682.64, stdev=251.53 00:36:28.740 clat percentiles (usec): 00:36:28.740 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2573], 00:36:28.740 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:28.740 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2999], 00:36:28.740 | 99.00th=[ 3752], 99.50th=[ 3982], 99.90th=[ 4490], 99.95th=[ 4883], 00:36:28.740 | 99.99th=[ 5211] 00:36:28.740 bw ( KiB/s): min=23216, max=24080, per=25.00%, avg=23706.67, stdev=228.39, samples=9 00:36:28.740 iops : min= 2902, max= 3010, avg=2963.33, stdev=28.55, samples=9 00:36:28.740 lat (msec) : 2=0.94%, 4=98.59%, 10=0.47% 00:36:28.740 cpu : usr=96.22%, sys=3.50%, ctx=8, majf=0, minf=9 00:36:28.740 IO depths : 1=0.1%, 2=0.2%, 4=71.7%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.740 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.740 issued rwts: total=14828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.740 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:28.740 filename1: (groupid=0, jobs=1): err= 0: pid=1324194: Fri Oct 11 12:10:12 2024 00:36:28.740 read: IOPS=2955, BW=23.1MiB/s (24.2MB/s)(115MiB/5001msec) 00:36:28.740 slat (nsec): min=5637, max=51073, avg=6689.93, stdev=2672.73 00:36:28.740 clat (usec): min=908, max=6100, avg=2688.15, stdev=259.67 00:36:28.740 lat (usec): min=914, max=6131, avg=2694.84, stdev=259.91 00:36:28.740 clat percentiles (usec): 00:36:28.740 | 1.00th=[ 2040], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2606], 
00:36:28.740 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:28.740 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2868], 95.00th=[ 2999], 00:36:28.740 | 99.00th=[ 3851], 99.50th=[ 4047], 99.90th=[ 4686], 99.95th=[ 4817], 00:36:28.740 | 99.99th=[ 4948] 00:36:28.740 bw ( KiB/s): min=23406, max=23872, per=24.91%, avg=23626.44, stdev=178.48, samples=9 00:36:28.740 iops : min= 2925, max= 2984, avg=2953.22, stdev=22.43, samples=9 00:36:28.740 lat (usec) : 1000=0.03% 00:36:28.740 lat (msec) : 2=0.87%, 4=98.53%, 10=0.56% 00:36:28.740 cpu : usr=97.26%, sys=2.50%, ctx=11, majf=0, minf=9 00:36:28.740 IO depths : 1=0.1%, 2=0.2%, 4=73.7%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.740 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.740 issued rwts: total=14780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.740 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:28.740 filename1: (groupid=0, jobs=1): err= 0: pid=1324195: Fri Oct 11 12:10:12 2024 00:36:28.740 read: IOPS=2952, BW=23.1MiB/s (24.2MB/s)(115MiB/5002msec) 00:36:28.740 slat (nsec): min=5644, max=81171, avg=7814.75, stdev=3280.77 00:36:28.740 clat (usec): min=947, max=4653, avg=2688.23, stdev=274.48 00:36:28.740 lat (usec): min=956, max=4662, avg=2696.05, stdev=274.64 00:36:28.740 clat percentiles (usec): 00:36:28.740 | 1.00th=[ 1958], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2638], 00:36:28.740 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:36:28.740 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2868], 95.00th=[ 3064], 00:36:28.740 | 99.00th=[ 3884], 99.50th=[ 4015], 99.90th=[ 4490], 99.95th=[ 4555], 00:36:28.740 | 99.99th=[ 4621] 00:36:28.740 bw ( KiB/s): min=23328, max=23888, per=24.91%, avg=23623.11, stdev=209.55, samples=9 00:36:28.740 iops : min= 2916, max= 2986, avg=2952.89, stdev=26.19, samples=9 00:36:28.740 lat (usec) : 1000=0.02% 00:36:28.740 lat (msec) : 2=1.13%, 4=98.31%, 10=0.53% 00:36:28.740 cpu : usr=96.66%, sys=3.08%, ctx=7, majf=0, minf=11 00:36:28.740 IO depths : 1=0.1%, 2=0.3%, 4=72.5%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:28.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.740 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:28.740 issued rwts: total=14767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:28.740 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:28.740 00:36:28.740 Run status group 0 (all jobs): 00:36:28.740 READ: bw=92.6MiB/s (97.1MB/s), 23.1MiB/s-23.3MiB/s (24.2MB/s-24.5MB/s), io=463MiB (486MB), run=5001-5003msec 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:28.740 12:10:13 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.741 00:36:28.741 real 0m24.412s 00:36:28.741 user 5m23.800s 00:36:28.741 sys 0m4.508s 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 ************************************ 00:36:28.741 END TEST fio_dif_rand_params 00:36:28.741 ************************************ 00:36:28.741 12:10:13 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:28.741 12:10:13 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:28.741 12:10:13 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 ************************************ 00:36:28.741 START TEST fio_dif_digest 00:36:28.741 ************************************ 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 bdev_null0 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:28.741 [2024-10-11 12:10:13.276580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:28.741 { 00:36:28.741 "params": { 00:36:28.741 "name": "Nvme$subsystem", 00:36:28.741 "trtype": "$TEST_TRANSPORT", 00:36:28.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:28.741 "adrfam": "ipv4", 00:36:28.741 "trsvcid": "$NVMF_PORT", 00:36:28.741 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:28.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:28.741 "hdgst": ${hdgst:-false}, 00:36:28.741 "ddgst": ${ddgst:-false} 00:36:28.741 }, 00:36:28.741 "method": "bdev_nvme_attach_controller" 00:36:28.741 } 00:36:28.741 EOF 00:36:28.741 )") 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:28.741 "params": { 00:36:28.741 "name": "Nvme0", 00:36:28.741 "trtype": "tcp", 00:36:28.741 "traddr": "10.0.0.2", 00:36:28.741 "adrfam": "ipv4", 00:36:28.741 "trsvcid": "4420", 00:36:28.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:28.741 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:28.741 "hdgst": true, 00:36:28.741 "ddgst": true 00:36:28.741 }, 00:36:28.741 "method": "bdev_nvme_attach_controller" 00:36:28.741 }' 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:28.741 12:10:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:29.333 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:29.333 ... 
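What makes this fio_dif_digest pass different is visible in the config just printed: "hdgst": true and "ddgst": true on bdev_nvme_attach_controller, so every NVMe/TCP PDU carries header and data digests, against a null bdev carrying DIF type 3 metadata. The target-side setup traced above condenses to four rpc calls (rpc_cmd is the framework wrapper around scripts/rpc.py; default rpc socket assumed):

# 64 MiB null bdev, 512-byte blocks plus 16-byte metadata, protection type 3
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# subsystem, namespace, and TCP listener exactly as in the trace
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420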
00:36:29.333 fio-3.35 00:36:29.333 Starting 3 threads 00:36:41.632 00:36:41.632 filename0: (groupid=0, jobs=1): err= 0: pid=1325689: Fri Oct 11 12:10:24 2024 00:36:41.632 read: IOPS=306, BW=38.4MiB/s (40.2MB/s)(385MiB/10048msec) 00:36:41.632 slat (nsec): min=5924, max=59431, avg=8367.01, stdev=2014.27 00:36:41.632 clat (usec): min=5865, max=51514, avg=9753.05, stdev=1337.19 00:36:41.632 lat (usec): min=5873, max=51523, avg=9761.42, stdev=1337.27 00:36:41.632 clat percentiles (usec): 00:36:41.632 | 1.00th=[ 7177], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9110], 00:36:41.633 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:36:41.633 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:36:41.633 | 99.00th=[11731], 99.50th=[11994], 99.90th=[13173], 99.95th=[47973], 00:36:41.633 | 99.99th=[51643] 00:36:41.633 bw ( KiB/s): min=38144, max=42240, per=34.47%, avg=39436.80, stdev=937.76, samples=20 00:36:41.633 iops : min= 298, max= 330, avg=308.10, stdev= 7.33, samples=20 00:36:41.633 lat (msec) : 10=62.93%, 20=37.01%, 50=0.03%, 100=0.03% 00:36:41.633 cpu : usr=94.46%, sys=5.30%, ctx=19, majf=0, minf=177 00:36:41.633 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.633 issued rwts: total=3083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:41.633 filename0: (groupid=0, jobs=1): err= 0: pid=1325690: Fri Oct 11 12:10:24 2024 00:36:41.633 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(367MiB/10046msec) 00:36:41.633 slat (usec): min=5, max=279, avg= 8.66, stdev= 5.47 00:36:41.633 clat (usec): min=6783, max=91086, avg=10246.45, stdev=3575.16 00:36:41.633 lat (usec): min=6791, max=91097, avg=10255.11, stdev=3575.22 00:36:41.633 clat percentiles (usec): 00:36:41.633 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:36:41.633 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:36:41.633 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10945], 95.00th=[11338], 00:36:41.633 | 99.00th=[12125], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:36:41.633 | 99.99th=[90702] 00:36:41.633 bw ( KiB/s): min=29184, max=39680, per=32.80%, avg=37529.60, stdev=2380.43, samples=20 00:36:41.633 iops : min= 228, max= 310, avg=293.20, stdev=18.60, samples=20 00:36:41.633 lat (msec) : 10=51.16%, 20=48.19%, 50=0.14%, 100=0.51% 00:36:41.633 cpu : usr=95.08%, sys=4.52%, ctx=226, majf=0, minf=162 00:36:41.633 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.633 issued rwts: total=2934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:41.633 filename0: (groupid=0, jobs=1): err= 0: pid=1325691: Fri Oct 11 12:10:24 2024 00:36:41.633 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(371MiB/10046msec) 00:36:41.633 slat (nsec): min=5886, max=36803, avg=7870.48, stdev=1644.66 00:36:41.633 clat (usec): min=6420, max=48337, avg=10140.89, stdev=1312.12 00:36:41.633 lat (usec): min=6428, max=48344, avg=10148.76, stdev=1312.11 00:36:41.633 clat percentiles (usec): 00:36:41.633 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 
9503], 00:36:41.633 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:36:41.633 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:36:41.633 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13435], 99.95th=[46400], 00:36:41.633 | 99.99th=[48497] 00:36:41.633 bw ( KiB/s): min=36608, max=40704, per=33.15%, avg=37926.40, stdev=891.76, samples=20 00:36:41.633 iops : min= 286, max= 318, avg=296.30, stdev= 6.97, samples=20 00:36:41.633 lat (msec) : 10=41.42%, 20=58.52%, 50=0.07% 00:36:41.633 cpu : usr=95.85%, sys=3.90%, ctx=44, majf=0, minf=218 00:36:41.633 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:41.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:41.633 issued rwts: total=2965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:41.633 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:41.633 00:36:41.633 Run status group 0 (all jobs): 00:36:41.633 READ: bw=112MiB/s (117MB/s), 36.5MiB/s-38.4MiB/s (38.3MB/s-40.2MB/s), io=1123MiB (1177MB), run=10046-10048msec 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.633 00:36:41.633 real 0m11.169s 00:36:41.633 user 0m45.096s 00:36:41.633 sys 0m1.694s 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:41.633 12:10:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:41.633 ************************************ 00:36:41.633 END TEST fio_dif_digest 00:36:41.633 ************************************ 00:36:41.633 12:10:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:41.633 12:10:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:41.633 rmmod nvme_tcp 00:36:41.633 rmmod nvme_fabrics 00:36:41.633 rmmod nvme_keyring 00:36:41.633 12:10:24 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1315315 ']' 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1315315 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1315315 ']' 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1315315 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1315315 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1315315' 00:36:41.633 killing process with pid 1315315 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1315315 00:36:41.633 12:10:24 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1315315 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:41.633 12:10:24 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:43.546 Waiting for block devices as requested 00:36:43.546 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:43.546 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:43.817 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:43.817 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:43.817 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:43.817 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:44.078 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:44.078 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:44.078 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:44.338 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:44.338 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:44.338 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:44.598 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:44.598 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:44.598 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:44.859 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:44.859 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:44.859 12:10:29 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:44.859 12:10:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:44.859 12:10:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.403 12:10:31 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:47.403 
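The nvmftestfini teardown traced through this block boils down to a short sequence; a rough standalone equivalent, assuming the target pid is in $NVMF_APP_PID and the run used the iso network setup as here:

# unload initiator-side kernel modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above)
sudo modprobe -v -r nvme-tcp
sudo modprobe -v -r nvme-fabrics
# stop the target app (killprocess 1315315 in the trace)
sudo kill "$NVMF_APP_PID"
# rebind devices from vfio-pci back to their kernel drivers (the "Waiting for block devices" block)
sudo ./scripts/setup.sh reset
# drop SPDK-added firewall rules, keep everything else (the iptr helper)
sudo sh -c 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
# flush the test interface address, as in the last cleanup line above
sudo ip -4 addr flush cvl_0_1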
00:36:47.403 real 1m17.746s 00:36:47.403 user 8m4.171s 00:36:47.403 sys 0m21.553s 00:36:47.403 12:10:31 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:47.403 12:10:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:47.403 ************************************ 00:36:47.403 END TEST nvmf_dif 00:36:47.403 ************************************ 00:36:47.403 12:10:31 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:47.403 12:10:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:47.403 12:10:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:47.403 12:10:31 -- common/autotest_common.sh@10 -- # set +x 00:36:47.403 ************************************ 00:36:47.403 START TEST nvmf_abort_qd_sizes 00:36:47.403 ************************************ 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:47.403 * Looking for test storage... 00:36:47.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:47.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.403 --rc genhtml_branch_coverage=1 00:36:47.403 --rc genhtml_function_coverage=1 00:36:47.403 --rc genhtml_legend=1 00:36:47.403 --rc geninfo_all_blocks=1 00:36:47.403 --rc geninfo_unexecuted_blocks=1 00:36:47.403 00:36:47.403 ' 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:47.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.403 --rc genhtml_branch_coverage=1 00:36:47.403 --rc genhtml_function_coverage=1 00:36:47.403 --rc genhtml_legend=1 00:36:47.403 --rc geninfo_all_blocks=1 00:36:47.403 --rc geninfo_unexecuted_blocks=1 00:36:47.403 00:36:47.403 ' 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:47.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.403 --rc genhtml_branch_coverage=1 00:36:47.403 --rc genhtml_function_coverage=1 00:36:47.403 --rc genhtml_legend=1 00:36:47.403 --rc geninfo_all_blocks=1 00:36:47.403 --rc geninfo_unexecuted_blocks=1 00:36:47.403 00:36:47.403 ' 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:47.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.403 --rc genhtml_branch_coverage=1 00:36:47.403 --rc genhtml_function_coverage=1 00:36:47.403 --rc genhtml_legend=1 00:36:47.403 --rc geninfo_all_blocks=1 00:36:47.403 --rc geninfo_unexecuted_blocks=1 00:36:47.403 00:36:47.403 ' 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.403 12:10:31 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:47.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:47.404 12:10:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:55.540 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:55.540 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:55.540 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:55.540 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:55.540 12:10:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:55.540 12:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:55.540 12:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:55.540 12:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:55.540 12:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:55.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:55.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:36:55.540 00:36:55.540 --- 10.0.0.2 ping statistics --- 00:36:55.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:55.540 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:36:55.540 12:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:55.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:55.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:36:55.540 00:36:55.540 --- 10.0.0.1 ping statistics --- 00:36:55.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:55.540 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:36:55.540 12:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:55.540 12:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:36:55.540 12:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:36:55.540 12:10:39 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:58.087 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:58.087 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1334973 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1334973 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1334973 ']' 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:58.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:58.087 12:10:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:58.087 [2024-10-11 12:10:42.718369] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:36:58.087 [2024-10-11 12:10:42.718418] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:58.347 [2024-10-11 12:10:42.802106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:58.347 [2024-10-11 12:10:42.840318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:58.347 [2024-10-11 12:10:42.840350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:58.347 [2024-10-11 12:10:42.840359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:58.347 [2024-10-11 12:10:42.840365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:58.347 [2024-10-11 12:10:42.840371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:58.347 [2024-10-11 12:10:42.841932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.347 [2024-10-11 12:10:42.842081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:58.347 [2024-10-11 12:10:42.842231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.347 [2024-10-11 12:10:42.842232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:58.918 12:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:58.918 12:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:36:58.918 12:10:43 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:58.918 12:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:58.918 12:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:59.178 
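For reference, the target launch traced just above (nvmf/common.sh@506-508) reduces to one command run inside the target namespace; this is a condensed reconstruction from the xtrace, with the workspace path kept exactly as logged.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xf    # shm id 0, tracepoint mask 0xFFFF, core mask 0xf (the 4 reactors started above)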
12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:59.178 12:10:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:59.178 ************************************ 00:36:59.178 START TEST spdk_target_abort 00:36:59.178 ************************************ 00:36:59.178 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:36:59.178 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:59.178 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:59.178 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.178 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.439 spdk_targetn1 00:36:59.439 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.439 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:59.439 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.439 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.439 [2024-10-11 12:10:43.920929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:59.439 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.440 [2024-10-11 12:10:43.961226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:59.440 12:10:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:59.701 [2024-10-11 12:10:44.147192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:224 len:8 PRP1 0x200004abe000 PRP2 0x0 00:36:59.701 [2024-10-11 12:10:44.147243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:001e p:1 m:0 dnr:0 00:36:59.701 [2024-10-11 12:10:44.148527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:288 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:36:59.701 [2024-10-11 12:10:44.148551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0026 p:1 m:0 dnr:0 00:36:59.701 [2024-10-11 12:10:44.156249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:464 len:8 PRP1 0x200004abe000 PRP2 0x0 00:36:59.701 [2024-10-11 12:10:44.156277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:36:59.701 [2024-10-11 12:10:44.195243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1592 len:8 PRP1 0x200004abe000 PRP2 0x0 00:36:59.701 [2024-10-11 12:10:44.195276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00c8 p:1 m:0 dnr:0 00:36:59.701 [2024-10-11 12:10:44.267169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3696 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:59.701 [2024-10-11 12:10:44.267202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00cf p:0 m:0 dnr:0 00:36:59.701 [2024-10-11 12:10:44.278236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4072 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:36:59.701 [2024-10-11 12:10:44.278265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:02.999 Initializing NVMe Controllers 00:37:02.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:02.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:02.999 Initialization complete. Launching workers. 
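The three abort runs in this test are driven by the loop traced at abort_qd_sizes.sh@32-34 with qds=(4 24 64) as set above; condensed, and with the absolute workspace path shortened for readability, it is roughly:
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done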
00:37:02.999 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12217, failed: 6 00:37:02.999 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2726, failed to submit 9497 00:37:02.999 success 729, unsuccessful 1997, failed 0 00:37:02.999 12:10:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:02.999 12:10:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:02.999 [2024-10-11 12:10:47.439958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:528 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:02.999 [2024-10-11 12:10:47.439997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:37:03.570 [2024-10-11 12:10:47.981015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:13112 len:8 PRP1 0x200004e48000 PRP2 0x0 00:37:03.570 [2024-10-11 12:10:47.981046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:37:06.112 Initializing NVMe Controllers 00:37:06.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:06.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:06.112 Initialization complete. Launching workers. 00:37:06.112 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8536, failed: 2 00:37:06.112 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 7305 00:37:06.112 success 351, unsuccessful 882, failed 0 00:37:06.112 12:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:06.112 12:10:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:06.112 [2024-10-11 12:10:50.707348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:175 nsid:1 lba:1888 len:8 PRP1 0x200004ade000 PRP2 0x0 00:37:06.112 [2024-10-11 12:10:50.707380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:175 cdw0:0 sqhd:00bf p:1 m:0 dnr:0 00:37:06.112 [2024-10-11 12:10:50.722967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:176 nsid:1 lba:3752 len:8 PRP1 0x200004ad4000 PRP2 0x0 00:37:06.112 [2024-10-11 12:10:50.722984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:176 cdw0:0 sqhd:009e p:0 m:0 dnr:0 00:37:09.411 [2024-10-11 12:10:53.559218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:174 nsid:1 lba:332408 len:8 PRP1 0x200004b24000 PRP2 0x0 00:37:09.411 [2024-10-11 12:10:53.559249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:174 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:37:09.411 Initializing NVMe Controllers 00:37:09.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:09.411 Associating 
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:09.411 Initialization complete. Launching workers. 00:37:09.411 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43581, failed: 3 00:37:09.411 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2746, failed to submit 40838 00:37:09.411 success 634, unsuccessful 2112, failed 0 00:37:09.411 12:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:09.411 12:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.411 12:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:09.411 12:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.411 12:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:09.411 12:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.411 12:10:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:11.321 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1334973 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1334973 ']' 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1334973 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334973 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334973' 00:37:11.322 killing process with pid 1334973 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1334973 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1334973 00:37:11.322 00:37:11.322 real 0m12.141s 00:37:11.322 user 0m49.519s 00:37:11.322 sys 0m1.963s 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:11.322 ************************************ 00:37:11.322 END TEST spdk_target_abort 00:37:11.322 ************************************ 00:37:11.322 12:10:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:11.322 12:10:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:11.322 12:10:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:11.322 12:10:55 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@10 -- # set +x 00:37:11.322 ************************************ 00:37:11.322 START TEST kernel_target_abort 00:37:11.322 ************************************ 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:11.322 12:10:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:14.622 Waiting for block devices as requested 00:37:14.622 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:14.883 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:14.883 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:14.883 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:15.143 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:15.143 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:15.143 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:15.143 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:15.404 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:15.404 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:15.665 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:15.665 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:15.665 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:15.926 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:15.926 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:15.926 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:15.926 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:16.187 No valid GPT data, bailing 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:16.187 12:11:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:16.187 00:37:16.187 Discovery Log Number of Records 2, Generation counter 2 00:37:16.187 =====Discovery Log Entry 0====== 00:37:16.187 trtype: tcp 00:37:16.187 adrfam: ipv4 00:37:16.187 subtype: current discovery subsystem 00:37:16.187 treq: not specified, sq flow control disable supported 00:37:16.187 portid: 1 00:37:16.187 trsvcid: 4420 00:37:16.187 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:16.187 traddr: 10.0.0.1 00:37:16.187 eflags: none 00:37:16.187 sectype: none 00:37:16.187 =====Discovery Log Entry 1====== 00:37:16.187 trtype: tcp 00:37:16.187 adrfam: ipv4 00:37:16.187 subtype: nvme subsystem 00:37:16.187 treq: not specified, sq flow control disable supported 00:37:16.187 portid: 1 00:37:16.187 trsvcid: 4420 00:37:16.187 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:16.187 traddr: 10.0.0.1 00:37:16.187 eflags: none 00:37:16.187 sectype: none 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:16.187 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.188 12:11:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:16.188 12:11:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:19.486 Initializing NVMe Controllers 00:37:19.486 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:19.486 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:19.486 Initialization complete. Launching workers. 00:37:19.486 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68076, failed: 0 00:37:19.486 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68076, failed to submit 0 00:37:19.486 success 0, unsuccessful 68076, failed 0 00:37:19.486 12:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:19.486 12:11:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:22.800 Initializing NVMe Controllers 00:37:22.800 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:22.800 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:22.800 Initialization complete. Launching workers. 
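The kernel target these runs exercise was wired up earlier through nvmet configfs (nvmf/common.sh@684-703). The mkdir and ln -s commands are traced verbatim, but xtrace hides the echo redirection targets, so the attribute paths below are the standard nvmet configfs names and are an assumption about where those echoes land:
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn                # traced
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1   # traced
    mkdir ports/1                                               # traced
    echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_serial   # assumed target
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host                          # assumed target
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path          # assumed target
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable                          # assumed target
    echo 10.0.0.1 > ports/1/addr_traddr                                                          # assumed target
    echo tcp > ports/1/addr_trtype                                                               # assumed target
    echo 4420 > ports/1/addr_trsvcid                                                             # assumed target
    echo ipv4 > ports/1/addr_adrfam                                                              # assumed target
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/    # traced: expose subsystem on the port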
00:37:22.800 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 120116, failed: 0 00:37:22.800 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30222, failed to submit 89894 00:37:22.800 success 0, unsuccessful 30222, failed 0 00:37:22.800 12:11:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:22.800 12:11:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:25.344 Initializing NVMe Controllers 00:37:25.344 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:25.344 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:25.344 Initialization complete. Launching workers. 00:37:25.344 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145877, failed: 0 00:37:25.344 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36514, failed to submit 109363 00:37:25.344 success 0, unsuccessful 36514, failed 0 00:37:25.344 12:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:25.344 12:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:25.344 12:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:37:25.605 12:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:25.605 12:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:25.605 12:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:25.605 12:11:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:25.605 12:11:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:25.605 12:11:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:25.605 12:11:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:28.903 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:28.903 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:28.903 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:30.813 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:30.813 00:37:30.813 real 0m19.460s 00:37:30.813 user 0m9.689s 00:37:30.813 sys 0m5.551s 00:37:30.813 12:11:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:30.813 12:11:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:30.813 ************************************ 00:37:30.813 END TEST kernel_target_abort 00:37:30.813 ************************************ 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:30.813 rmmod nvme_tcp 00:37:30.813 rmmod nvme_fabrics 00:37:30.813 rmmod nvme_keyring 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1334973 ']' 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1334973 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1334973 ']' 00:37:30.813 12:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1334973 00:37:30.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1334973) - No such process 00:37:30.814 12:11:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1334973 is not found' 00:37:30.814 Process with pid 1334973 is not found 00:37:30.814 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:30.814 12:11:15 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:34.113 Waiting for block devices as requested 00:37:34.113 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:34.374 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:34.374 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:34.374 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:34.635 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:34.635 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:34.635 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:34.895 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:34.895 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:34.895 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:35.155 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:35.155 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:35.155 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:35.416 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:35.416 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:35.416 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:35.677 
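The `clean_kernel_target` steps traced above tear down the kernel nvmet target in a fixed order: break the port-to-subsystem link before removing the namespace, port, and subsystem configfs directories, then unload the modules. A minimal sketch of that order follows, assuming the test's NQN, port 1, and namespace 1; the redirect target of the `echo 0` above is not shown in the log and is assumed here to disable the namespace.

```bash
#!/usr/bin/env bash
# Sketch of the configfs teardown order traced by clean_kernel_target above.
# Assumption: the 'echo 0' in the trace disables the namespace (its exact
# redirect target is not visible in the log). Run as root.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet

if [[ -e $cfg/subsystems/$nqn ]]; then
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # stop serving the namespace
    rm -f "$cfg/ports/1/subsystems/$nqn"                 # break the port -> subsystem link first
    rmdir "$cfg/subsystems/$nqn/namespaces/1"            # rmdir only removes empty configfs dirs,
    rmdir "$cfg/ports/1"                                 # so namespace, port and subsystem are
    rmdir "$cfg/subsystems/$nqn"                         # removed leaf-to-root
fi
modprobe -r nvmet_tcp nvmet                              # finally unload the kernel target modules
```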
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:35.677 12:11:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:37.589 12:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:37.589 00:37:37.589 real 0m50.619s 00:37:37.589 user 1m4.448s 00:37:37.589 sys 0m17.920s 00:37:37.589 12:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:37.589 12:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:37.589 ************************************ 00:37:37.589 END TEST nvmf_abort_qd_sizes 00:37:37.589 ************************************ 00:37:37.850 12:11:22 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:37.850 12:11:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:37.850 12:11:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:37.850 12:11:22 -- common/autotest_common.sh@10 -- # set +x 00:37:37.850 ************************************ 00:37:37.850 START TEST keyring_file 00:37:37.850 ************************************ 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:37.850 * Looking for test storage... 
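The abort test that just finished loops `build/examples/abort` over a list of queue depths (the `for qd in "${qds[@]}"` trace above), submitting mixed I/O and abort commands against the kernel target at 10.0.0.1:4420. A sketch of that sweep is below; the `qds` values are illustrative, since the script's actual list is defined in `abort_qd_sizes.sh` and only the `-q 64` pass is visible in this part of the log.

```bash
# Sketch of the queue-depth sweep driven above; the qds values here are
# illustrative, the real list comes from the abort_qd_sizes.sh test script.
qds=(4 64)
for qd in "${qds[@]}"; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done
```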
00:37:37.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:37.850 12:11:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:37.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.850 --rc genhtml_branch_coverage=1 00:37:37.850 --rc genhtml_function_coverage=1 00:37:37.850 --rc genhtml_legend=1 00:37:37.850 --rc geninfo_all_blocks=1 00:37:37.850 --rc geninfo_unexecuted_blocks=1 00:37:37.850 00:37:37.850 ' 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:37.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.850 --rc genhtml_branch_coverage=1 00:37:37.850 --rc genhtml_function_coverage=1 00:37:37.850 --rc genhtml_legend=1 00:37:37.850 --rc geninfo_all_blocks=1 
00:37:37.850 --rc geninfo_unexecuted_blocks=1 00:37:37.850 00:37:37.850 ' 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:37.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.850 --rc genhtml_branch_coverage=1 00:37:37.850 --rc genhtml_function_coverage=1 00:37:37.850 --rc genhtml_legend=1 00:37:37.850 --rc geninfo_all_blocks=1 00:37:37.850 --rc geninfo_unexecuted_blocks=1 00:37:37.850 00:37:37.850 ' 00:37:37.850 12:11:22 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:37.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:37.850 --rc genhtml_branch_coverage=1 00:37:37.850 --rc genhtml_function_coverage=1 00:37:37.850 --rc genhtml_legend=1 00:37:37.850 --rc geninfo_all_blocks=1 00:37:37.850 --rc geninfo_unexecuted_blocks=1 00:37:37.850 00:37:37.850 ' 00:37:37.851 12:11:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:37.851 12:11:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:37.851 12:11:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.111 12:11:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:38.111 12:11:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.111 12:11:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.111 12:11:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.111 12:11:22 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.111 12:11:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.111 12:11:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.111 12:11:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:38.111 12:11:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:38.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:38.111 12:11:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
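The `[: : integer expression expected` complaint above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: the `-eq` operator requires integers on both sides, and the variable under test expands to an empty string. A short sketch of the failing form and two defensive rewrites (the variable name is illustrative, not the script's):

```bash
# Reproduces the error logged above: -eq needs integers on both sides.
flag=''
[ "$flag" -eq 1 ]              # -> bash: [: : integer expression expected

# Defensive alternatives (variable name is illustrative):
[ "${flag:-0}" -eq 1 ]         # default an empty/unset value to 0
[[ -n $flag && $flag -eq 1 ]]  # or short-circuit when the value is empty
```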
00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.anxogYXhxn 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.anxogYXhxn 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.anxogYXhxn 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.anxogYXhxn 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.k6dQO4G9Vw 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:38.112 12:11:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.k6dQO4G9Vw 00:37:38.112 12:11:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.k6dQO4G9Vw 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.k6dQO4G9Vw 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=1345059 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1345059 00:37:38.112 12:11:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:38.112 12:11:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1345059 ']' 00:37:38.112 12:11:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:38.112 12:11:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:38.112 12:11:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:38.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:38.112 12:11:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:38.112 12:11:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:38.112 [2024-10-11 12:11:22.669004] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:37:38.112 [2024-10-11 12:11:22.669062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345059 ] 00:37:38.458 [2024-10-11 12:11:22.746954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.458 [2024-10-11 12:11:22.784377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:39.099 12:11:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:39.099 [2024-10-11 12:11:23.468854] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:39.099 null0 00:37:39.099 [2024-10-11 12:11:23.500896] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:39.099 [2024-10-11 12:11:23.501198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.099 12:11:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:39.099 [2024-10-11 12:11:23.532962] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:39.099 request: 00:37:39.099 { 00:37:39.099 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.099 "secure_channel": false, 00:37:39.099 "listen_address": { 00:37:39.099 "trtype": "tcp", 00:37:39.099 "traddr": "127.0.0.1", 00:37:39.099 "trsvcid": "4420" 00:37:39.099 }, 00:37:39.099 "method": "nvmf_subsystem_add_listener", 00:37:39.099 "req_id": 1 00:37:39.099 } 00:37:39.099 Got JSON-RPC error response 00:37:39.099 response: 00:37:39.099 { 00:37:39.099 
"code": -32602, 00:37:39.099 "message": "Invalid parameters" 00:37:39.099 } 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:39.099 12:11:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=1345363 00:37:39.099 12:11:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1345363 /var/tmp/bperf.sock 00:37:39.099 12:11:23 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1345363 ']' 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:39.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:39.099 12:11:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:39.099 [2024-10-11 12:11:23.592547] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:37:39.099 [2024-10-11 12:11:23.592602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345363 ] 00:37:39.099 [2024-10-11 12:11:23.672124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.099 [2024-10-11 12:11:23.724804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.040 12:11:24 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:40.040 12:11:24 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:40.040 12:11:24 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.anxogYXhxn 00:37:40.040 12:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.anxogYXhxn 00:37:40.040 12:11:24 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.k6dQO4G9Vw 00:37:40.040 12:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.k6dQO4G9Vw 00:37:40.301 12:11:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:40.301 12:11:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:40.301 12:11:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.301 12:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.301 12:11:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:37:40.301 12:11:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.anxogYXhxn == \/\t\m\p\/\t\m\p\.\a\n\x\o\g\Y\X\h\x\n ]] 00:37:40.301 12:11:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:40.301 12:11:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:40.301 12:11:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.301 12:11:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.301 12:11:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:40.561 12:11:25 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.k6dQO4G9Vw == \/\t\m\p\/\t\m\p\.\k\6\d\Q\O\4\G\9\V\w ]] 00:37:40.561 12:11:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:40.561 12:11:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:40.561 12:11:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:40.561 12:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.561 12:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:40.561 12:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.821 12:11:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:40.821 12:11:25 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:40.821 12:11:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:40.821 12:11:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:40.821 12:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.821 12:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:40.821 12:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.821 12:11:25 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:40.821 12:11:25 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.821 12:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:41.082 [2024-10-11 12:11:25.570342] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:41.082 nvme0n1 00:37:41.082 12:11:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:41.082 12:11:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:41.082 12:11:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:41.082 12:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.082 12:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.082 12:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:41.344 12:11:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:41.344 12:11:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:41.344 12:11:25 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:37:41.344 12:11:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:41.344 12:11:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.344 12:11:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.344 12:11:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:41.608 12:11:26 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:41.608 12:11:26 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:41.608 Running I/O for 1 seconds... 00:37:42.807 17130.00 IOPS, 66.91 MiB/s 00:37:42.807 Latency(us) 00:37:42.808 [2024-10-11T10:11:27.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.808 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:42.808 nvme0n1 : 1.00 17189.76 67.15 0.00 0.00 7432.63 2594.13 18131.63 00:37:42.808 [2024-10-11T10:11:27.440Z] =================================================================================================================== 00:37:42.808 [2024-10-11T10:11:27.440Z] Total : 17189.76 67.15 0.00 0.00 7432.63 2594.13 18131.63 00:37:42.808 { 00:37:42.808 "results": [ 00:37:42.808 { 00:37:42.808 "job": "nvme0n1", 00:37:42.808 "core_mask": "0x2", 00:37:42.808 "workload": "randrw", 00:37:42.808 "percentage": 50, 00:37:42.808 "status": "finished", 00:37:42.808 "queue_depth": 128, 00:37:42.808 "io_size": 4096, 00:37:42.808 "runtime": 1.004028, 00:37:42.808 "iops": 17189.759648137304, 00:37:42.808 "mibps": 67.14749862553634, 00:37:42.808 "io_failed": 0, 00:37:42.808 "io_timeout": 0, 00:37:42.808 "avg_latency_us": 7432.630933426039, 00:37:42.808 "min_latency_us": 2594.133333333333, 00:37:42.808 "max_latency_us": 18131.626666666667 00:37:42.808 } 00:37:42.808 ], 00:37:42.808 "core_count": 1 00:37:42.808 } 00:37:42.808 12:11:27 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:42.808 12:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:42.808 12:11:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:42.808 12:11:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:42.808 12:11:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:42.808 12:11:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:42.808 12:11:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:42.808 12:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.068 12:11:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:43.068 12:11:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:43.068 12:11:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:43.068 12:11:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:43.068 12:11:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:43.068 12:11:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.068 12:11:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.328 12:11:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:43.328 12:11:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:43.328 12:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:43.328 [2024-10-11 12:11:27.886131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:43.328 [2024-10-11 12:11:27.886887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x951100 (107): Transport endpoint is not connected 00:37:43.328 [2024-10-11 12:11:27.887882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x951100 (9): Bad file descriptor 00:37:43.328 [2024-10-11 12:11:27.888884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:43.328 [2024-10-11 12:11:27.888894] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:43.328 [2024-10-11 12:11:27.888900] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:43.328 [2024-10-11 12:11:27.888906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:43.328 request: 00:37:43.328 { 00:37:43.328 "name": "nvme0", 00:37:43.328 "trtype": "tcp", 00:37:43.328 "traddr": "127.0.0.1", 00:37:43.328 "adrfam": "ipv4", 00:37:43.328 "trsvcid": "4420", 00:37:43.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:43.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:43.328 "prchk_reftag": false, 00:37:43.328 "prchk_guard": false, 00:37:43.328 "hdgst": false, 00:37:43.328 "ddgst": false, 00:37:43.328 "psk": "key1", 00:37:43.328 "allow_unrecognized_csi": false, 00:37:43.328 "method": "bdev_nvme_attach_controller", 00:37:43.328 "req_id": 1 00:37:43.328 } 00:37:43.328 Got JSON-RPC error response 00:37:43.328 response: 00:37:43.328 { 00:37:43.328 "code": -5, 00:37:43.328 "message": "Input/output error" 00:37:43.328 } 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:43.328 12:11:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:43.329 12:11:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:43.329 12:11:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:43.329 12:11:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:43.329 12:11:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:43.329 12:11:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:43.329 12:11:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.329 12:11:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:43.329 12:11:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.589 12:11:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:43.589 12:11:28 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:43.589 12:11:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:43.589 12:11:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:43.589 12:11:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.589 12:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.589 12:11:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:43.849 12:11:28 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:43.849 12:11:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:43.849 12:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:43.849 12:11:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:43.849 12:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:44.109 12:11:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:44.109 12:11:28 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:44.109 12:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.369 12:11:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:44.369 12:11:28 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.anxogYXhxn 00:37:44.369 12:11:28 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.anxogYXhxn 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.anxogYXhxn 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.anxogYXhxn 00:37:44.369 12:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.anxogYXhxn 00:37:44.369 [2024-10-11 12:11:28.921567] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.anxogYXhxn': 0100660 00:37:44.369 [2024-10-11 12:11:28.921586] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:44.369 request: 00:37:44.369 { 00:37:44.369 "name": "key0", 00:37:44.369 "path": "/tmp/tmp.anxogYXhxn", 00:37:44.369 "method": "keyring_file_add_key", 00:37:44.369 "req_id": 1 00:37:44.369 } 00:37:44.369 Got JSON-RPC error response 00:37:44.369 response: 00:37:44.369 { 00:37:44.369 "code": -1, 00:37:44.369 "message": "Operation not permitted" 00:37:44.369 } 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:44.369 12:11:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:44.370 12:11:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:44.370 12:11:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:44.370 12:11:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.anxogYXhxn 00:37:44.370 12:11:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.anxogYXhxn 00:37:44.370 12:11:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.anxogYXhxn 00:37:44.629 12:11:29 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.anxogYXhxn 00:37:44.629 12:11:29 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:44.629 12:11:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:44.629 12:11:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:44.629 12:11:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:44.629 12:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.629 12:11:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:44.890 12:11:29 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:44.890 12:11:29 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:44.890 12:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:44.890 [2024-10-11 12:11:29.491014] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.anxogYXhxn': No such file or directory 00:37:44.890 [2024-10-11 12:11:29.491025] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:44.890 [2024-10-11 12:11:29.491037] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:44.890 [2024-10-11 12:11:29.491042] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:44.890 [2024-10-11 12:11:29.491048] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:44.890 [2024-10-11 12:11:29.491053] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:44.890 request: 00:37:44.890 { 00:37:44.890 "name": "nvme0", 00:37:44.890 "trtype": "tcp", 00:37:44.890 "traddr": "127.0.0.1", 00:37:44.890 "adrfam": "ipv4", 00:37:44.890 "trsvcid": "4420", 00:37:44.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:44.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:44.890 "prchk_reftag": false, 00:37:44.890 "prchk_guard": false, 00:37:44.890 "hdgst": false, 00:37:44.890 "ddgst": false, 00:37:44.890 "psk": "key0", 00:37:44.890 "allow_unrecognized_csi": false, 00:37:44.890 "method": "bdev_nvme_attach_controller", 00:37:44.890 "req_id": 1 00:37:44.890 } 00:37:44.890 Got JSON-RPC error response 00:37:44.890 response: 00:37:44.890 { 00:37:44.890 "code": -19, 00:37:44.890 "message": "No such device" 00:37:44.890 } 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:44.890 12:11:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:44.890 12:11:29 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:44.890 12:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:45.150 12:11:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5DeCJmEOpT 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:45.150 12:11:29 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:45.150 12:11:29 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:45.150 12:11:29 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:45.150 12:11:29 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:45.150 12:11:29 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:45.150 12:11:29 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5DeCJmEOpT 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5DeCJmEOpT 00:37:45.150 12:11:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.5DeCJmEOpT 00:37:45.150 12:11:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5DeCJmEOpT 00:37:45.150 12:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5DeCJmEOpT 00:37:45.411 12:11:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:45.411 12:11:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:45.671 nvme0n1 00:37:45.671 12:11:30 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:45.671 12:11:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:45.671 12:11:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:45.671 12:11:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:45.671 12:11:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:45.671 12:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.932 12:11:30 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:45.932 12:11:30 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:45.932 12:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:45.932 12:11:30 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:45.932 12:11:30 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:45.932 12:11:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:45.932 12:11:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:45.932 12:11:30 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:46.192 12:11:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:46.192 12:11:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:46.192 12:11:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:46.192 12:11:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:46.192 12:11:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:46.192 12:11:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:46.192 12:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:46.452 12:11:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:46.452 12:11:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:46.452 12:11:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:46.452 12:11:31 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:46.452 12:11:31 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:46.452 12:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:46.712 12:11:31 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:46.712 12:11:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5DeCJmEOpT 00:37:46.712 12:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5DeCJmEOpT 00:37:46.974 12:11:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.k6dQO4G9Vw 00:37:46.974 12:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.k6dQO4G9Vw 00:37:46.974 12:11:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:46.974 12:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:47.234 nvme0n1 00:37:47.234 12:11:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:47.234 12:11:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:47.496 12:11:32 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:47.496 "subsystems": [ 00:37:47.496 { 00:37:47.496 "subsystem": "keyring", 00:37:47.496 "config": [ 00:37:47.496 { 00:37:47.496 "method": "keyring_file_add_key", 00:37:47.496 "params": { 00:37:47.496 "name": "key0", 00:37:47.496 "path": "/tmp/tmp.5DeCJmEOpT" 00:37:47.496 } 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "method": "keyring_file_add_key", 00:37:47.496 "params": { 00:37:47.496 "name": "key1", 00:37:47.496 "path": "/tmp/tmp.k6dQO4G9Vw" 00:37:47.496 } 00:37:47.496 } 00:37:47.496 ] 00:37:47.496 
}, 00:37:47.496 { 00:37:47.496 "subsystem": "iobuf", 00:37:47.496 "config": [ 00:37:47.496 { 00:37:47.496 "method": "iobuf_set_options", 00:37:47.496 "params": { 00:37:47.496 "small_pool_count": 8192, 00:37:47.496 "large_pool_count": 1024, 00:37:47.496 "small_bufsize": 8192, 00:37:47.496 "large_bufsize": 135168 00:37:47.496 } 00:37:47.496 } 00:37:47.496 ] 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "subsystem": "sock", 00:37:47.496 "config": [ 00:37:47.496 { 00:37:47.496 "method": "sock_set_default_impl", 00:37:47.496 "params": { 00:37:47.496 "impl_name": "posix" 00:37:47.496 } 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "method": "sock_impl_set_options", 00:37:47.496 "params": { 00:37:47.496 "impl_name": "ssl", 00:37:47.496 "recv_buf_size": 4096, 00:37:47.496 "send_buf_size": 4096, 00:37:47.496 "enable_recv_pipe": true, 00:37:47.496 "enable_quickack": false, 00:37:47.496 "enable_placement_id": 0, 00:37:47.496 "enable_zerocopy_send_server": true, 00:37:47.496 "enable_zerocopy_send_client": false, 00:37:47.496 "zerocopy_threshold": 0, 00:37:47.496 "tls_version": 0, 00:37:47.496 "enable_ktls": false 00:37:47.496 } 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "method": "sock_impl_set_options", 00:37:47.496 "params": { 00:37:47.496 "impl_name": "posix", 00:37:47.496 "recv_buf_size": 2097152, 00:37:47.496 "send_buf_size": 2097152, 00:37:47.496 "enable_recv_pipe": true, 00:37:47.496 "enable_quickack": false, 00:37:47.496 "enable_placement_id": 0, 00:37:47.496 "enable_zerocopy_send_server": true, 00:37:47.496 "enable_zerocopy_send_client": false, 00:37:47.496 "zerocopy_threshold": 0, 00:37:47.496 "tls_version": 0, 00:37:47.496 "enable_ktls": false 00:37:47.496 } 00:37:47.496 } 00:37:47.496 ] 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "subsystem": "vmd", 00:37:47.496 "config": [] 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "subsystem": "accel", 00:37:47.496 "config": [ 00:37:47.496 { 00:37:47.496 "method": "accel_set_options", 00:37:47.496 "params": { 00:37:47.496 "small_cache_size": 128, 00:37:47.496 "large_cache_size": 16, 00:37:47.496 "task_count": 2048, 00:37:47.496 "sequence_count": 2048, 00:37:47.496 "buf_count": 2048 00:37:47.496 } 00:37:47.496 } 00:37:47.496 ] 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "subsystem": "bdev", 00:37:47.496 "config": [ 00:37:47.496 { 00:37:47.496 "method": "bdev_set_options", 00:37:47.496 "params": { 00:37:47.496 "bdev_io_pool_size": 65535, 00:37:47.496 "bdev_io_cache_size": 256, 00:37:47.496 "bdev_auto_examine": true, 00:37:47.496 "iobuf_small_cache_size": 128, 00:37:47.496 "iobuf_large_cache_size": 16 00:37:47.496 } 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "method": "bdev_raid_set_options", 00:37:47.496 "params": { 00:37:47.496 "process_window_size_kb": 1024, 00:37:47.496 "process_max_bandwidth_mb_sec": 0 00:37:47.496 } 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "method": "bdev_iscsi_set_options", 00:37:47.496 "params": { 00:37:47.496 "timeout_sec": 30 00:37:47.496 } 00:37:47.496 }, 00:37:47.496 { 00:37:47.496 "method": "bdev_nvme_set_options", 00:37:47.496 "params": { 00:37:47.496 "action_on_timeout": "none", 00:37:47.496 "timeout_us": 0, 00:37:47.496 "timeout_admin_us": 0, 00:37:47.496 "keep_alive_timeout_ms": 10000, 00:37:47.496 "arbitration_burst": 0, 00:37:47.496 "low_priority_weight": 0, 00:37:47.496 "medium_priority_weight": 0, 00:37:47.496 "high_priority_weight": 0, 00:37:47.496 "nvme_adminq_poll_period_us": 10000, 00:37:47.496 "nvme_ioq_poll_period_us": 0, 00:37:47.496 "io_queue_requests": 512, 00:37:47.496 "delay_cmd_submit": true, 00:37:47.496 
"transport_retry_count": 4, 00:37:47.496 "bdev_retry_count": 3, 00:37:47.496 "transport_ack_timeout": 0, 00:37:47.496 "ctrlr_loss_timeout_sec": 0, 00:37:47.496 "reconnect_delay_sec": 0, 00:37:47.496 "fast_io_fail_timeout_sec": 0, 00:37:47.496 "disable_auto_failback": false, 00:37:47.496 "generate_uuids": false, 00:37:47.496 "transport_tos": 0, 00:37:47.496 "nvme_error_stat": false, 00:37:47.496 "rdma_srq_size": 0, 00:37:47.497 "io_path_stat": false, 00:37:47.497 "allow_accel_sequence": false, 00:37:47.497 "rdma_max_cq_size": 0, 00:37:47.497 "rdma_cm_event_timeout_ms": 0, 00:37:47.497 "dhchap_digests": [ 00:37:47.497 "sha256", 00:37:47.497 "sha384", 00:37:47.497 "sha512" 00:37:47.497 ], 00:37:47.497 "dhchap_dhgroups": [ 00:37:47.497 "null", 00:37:47.497 "ffdhe2048", 00:37:47.497 "ffdhe3072", 00:37:47.497 "ffdhe4096", 00:37:47.497 "ffdhe6144", 00:37:47.497 "ffdhe8192" 00:37:47.497 ] 00:37:47.497 } 00:37:47.497 }, 00:37:47.497 { 00:37:47.497 "method": "bdev_nvme_attach_controller", 00:37:47.497 "params": { 00:37:47.497 "name": "nvme0", 00:37:47.497 "trtype": "TCP", 00:37:47.497 "adrfam": "IPv4", 00:37:47.497 "traddr": "127.0.0.1", 00:37:47.497 "trsvcid": "4420", 00:37:47.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:47.497 "prchk_reftag": false, 00:37:47.497 "prchk_guard": false, 00:37:47.497 "ctrlr_loss_timeout_sec": 0, 00:37:47.497 "reconnect_delay_sec": 0, 00:37:47.497 "fast_io_fail_timeout_sec": 0, 00:37:47.497 "psk": "key0", 00:37:47.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:47.497 "hdgst": false, 00:37:47.497 "ddgst": false, 00:37:47.497 "multipath": "multipath" 00:37:47.497 } 00:37:47.497 }, 00:37:47.497 { 00:37:47.497 "method": "bdev_nvme_set_hotplug", 00:37:47.497 "params": { 00:37:47.497 "period_us": 100000, 00:37:47.497 "enable": false 00:37:47.497 } 00:37:47.497 }, 00:37:47.497 { 00:37:47.497 "method": "bdev_wait_for_examine" 00:37:47.497 } 00:37:47.497 ] 00:37:47.497 }, 00:37:47.497 { 00:37:47.497 "subsystem": "nbd", 00:37:47.497 "config": [] 00:37:47.497 } 00:37:47.497 ] 00:37:47.497 }' 00:37:47.497 12:11:32 keyring_file -- keyring/file.sh@115 -- # killprocess 1345363 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1345363 ']' 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1345363 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1345363 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1345363' 00:37:47.497 killing process with pid 1345363 00:37:47.497 12:11:32 keyring_file -- common/autotest_common.sh@969 -- # kill 1345363 00:37:47.497 Received shutdown signal, test time was about 1.000000 seconds 00:37:47.497 00:37:47.497 Latency(us) 00:37:47.497 [2024-10-11T10:11:32.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.497 [2024-10-11T10:11:32.129Z] =================================================================================================================== 00:37:47.497 [2024-10-11T10:11:32.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:47.497 12:11:32 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1345363 00:37:47.759 12:11:32 keyring_file -- keyring/file.sh@118 -- # bperfpid=1347104 00:37:47.759 12:11:32 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1347104 /var/tmp/bperf.sock 00:37:47.759 12:11:32 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1347104 ']' 00:37:47.759 12:11:32 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:47.759 12:11:32 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:47.759 12:11:32 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:47.759 12:11:32 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:47.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:47.759 12:11:32 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:47.759 12:11:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:47.759 12:11:32 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:47.759 "subsystems": [ 00:37:47.759 { 00:37:47.759 "subsystem": "keyring", 00:37:47.759 "config": [ 00:37:47.759 { 00:37:47.759 "method": "keyring_file_add_key", 00:37:47.759 "params": { 00:37:47.759 "name": "key0", 00:37:47.759 "path": "/tmp/tmp.5DeCJmEOpT" 00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "keyring_file_add_key", 00:37:47.759 "params": { 00:37:47.759 "name": "key1", 00:37:47.759 "path": "/tmp/tmp.k6dQO4G9Vw" 00:37:47.759 } 00:37:47.759 } 00:37:47.759 ] 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "subsystem": "iobuf", 00:37:47.759 "config": [ 00:37:47.759 { 00:37:47.759 "method": "iobuf_set_options", 00:37:47.759 "params": { 00:37:47.759 "small_pool_count": 8192, 00:37:47.759 "large_pool_count": 1024, 00:37:47.759 "small_bufsize": 8192, 00:37:47.759 "large_bufsize": 135168 00:37:47.759 } 00:37:47.759 } 00:37:47.759 ] 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "subsystem": "sock", 00:37:47.759 "config": [ 00:37:47.759 { 00:37:47.759 "method": "sock_set_default_impl", 00:37:47.759 "params": { 00:37:47.759 "impl_name": "posix" 00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "sock_impl_set_options", 00:37:47.759 "params": { 00:37:47.759 "impl_name": "ssl", 00:37:47.759 "recv_buf_size": 4096, 00:37:47.759 "send_buf_size": 4096, 00:37:47.759 "enable_recv_pipe": true, 00:37:47.759 "enable_quickack": false, 00:37:47.759 "enable_placement_id": 0, 00:37:47.759 "enable_zerocopy_send_server": true, 00:37:47.759 "enable_zerocopy_send_client": false, 00:37:47.759 "zerocopy_threshold": 0, 00:37:47.759 "tls_version": 0, 00:37:47.759 "enable_ktls": false 00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "sock_impl_set_options", 00:37:47.759 "params": { 00:37:47.759 "impl_name": "posix", 00:37:47.759 "recv_buf_size": 2097152, 00:37:47.759 "send_buf_size": 2097152, 00:37:47.759 "enable_recv_pipe": true, 00:37:47.759 "enable_quickack": false, 00:37:47.759 "enable_placement_id": 0, 00:37:47.759 "enable_zerocopy_send_server": true, 00:37:47.759 "enable_zerocopy_send_client": false, 00:37:47.759 "zerocopy_threshold": 0, 00:37:47.759 "tls_version": 0, 00:37:47.759 "enable_ktls": false 00:37:47.759 } 00:37:47.759 } 00:37:47.759 ] 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "subsystem": "vmd", 00:37:47.759 
"config": [] 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "subsystem": "accel", 00:37:47.759 "config": [ 00:37:47.759 { 00:37:47.759 "method": "accel_set_options", 00:37:47.759 "params": { 00:37:47.759 "small_cache_size": 128, 00:37:47.759 "large_cache_size": 16, 00:37:47.759 "task_count": 2048, 00:37:47.759 "sequence_count": 2048, 00:37:47.759 "buf_count": 2048 00:37:47.759 } 00:37:47.759 } 00:37:47.759 ] 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "subsystem": "bdev", 00:37:47.759 "config": [ 00:37:47.759 { 00:37:47.759 "method": "bdev_set_options", 00:37:47.759 "params": { 00:37:47.759 "bdev_io_pool_size": 65535, 00:37:47.759 "bdev_io_cache_size": 256, 00:37:47.759 "bdev_auto_examine": true, 00:37:47.759 "iobuf_small_cache_size": 128, 00:37:47.759 "iobuf_large_cache_size": 16 00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "bdev_raid_set_options", 00:37:47.759 "params": { 00:37:47.759 "process_window_size_kb": 1024, 00:37:47.759 "process_max_bandwidth_mb_sec": 0 00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "bdev_iscsi_set_options", 00:37:47.759 "params": { 00:37:47.759 "timeout_sec": 30 00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "bdev_nvme_set_options", 00:37:47.759 "params": { 00:37:47.759 "action_on_timeout": "none", 00:37:47.759 "timeout_us": 0, 00:37:47.759 "timeout_admin_us": 0, 00:37:47.759 "keep_alive_timeout_ms": 10000, 00:37:47.759 "arbitration_burst": 0, 00:37:47.759 "low_priority_weight": 0, 00:37:47.759 "medium_priority_weight": 0, 00:37:47.759 "high_priority_weight": 0, 00:37:47.759 "nvme_adminq_poll_period_us": 10000, 00:37:47.759 "nvme_ioq_poll_period_us": 0, 00:37:47.759 "io_queue_requests": 512, 00:37:47.759 "delay_cmd_submit": true, 00:37:47.759 "transport_retry_count": 4, 00:37:47.759 "bdev_retry_count": 3, 00:37:47.759 "transport_ack_timeout": 0, 00:37:47.759 "ctrlr_loss_timeout_sec": 0, 00:37:47.759 "reconnect_delay_sec": 0, 00:37:47.759 "fast_io_fail_timeout_sec": 0, 00:37:47.759 "disable_auto_failback": false, 00:37:47.759 "generate_uuids": false, 00:37:47.759 "transport_tos": 0, 00:37:47.759 "nvme_error_stat": false, 00:37:47.759 "rdma_srq_size": 0, 00:37:47.759 "io_path_stat": false, 00:37:47.759 "allow_accel_sequence": false, 00:37:47.759 "rdma_max_cq_size": 0, 00:37:47.759 "rdma_cm_event_timeout_ms": 0, 00:37:47.759 "dhchap_digests": [ 00:37:47.759 "sha256", 00:37:47.759 "sha384", 00:37:47.759 "sha512" 00:37:47.759 ], 00:37:47.759 "dhchap_dhgroups": [ 00:37:47.759 "null", 00:37:47.759 "ffdhe2048", 00:37:47.759 "ffdhe3072", 00:37:47.759 "ffdhe4096", 00:37:47.759 "ffdhe6144", 00:37:47.759 "ffdhe8192" 00:37:47.759 ] 00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "bdev_nvme_attach_controller", 00:37:47.759 "params": { 00:37:47.759 "name": "nvme0", 00:37:47.759 "trtype": "TCP", 00:37:47.759 "adrfam": "IPv4", 00:37:47.759 "traddr": "127.0.0.1", 00:37:47.759 "trsvcid": "4420", 00:37:47.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:47.759 "prchk_reftag": false, 00:37:47.759 "prchk_guard": false, 00:37:47.759 "ctrlr_loss_timeout_sec": 0, 00:37:47.759 "reconnect_delay_sec": 0, 00:37:47.759 "fast_io_fail_timeout_sec": 0, 00:37:47.759 "psk": "key0", 00:37:47.759 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:47.759 "hdgst": false, 00:37:47.759 "ddgst": false, 00:37:47.759 "multipath": "multipath" 00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "bdev_nvme_set_hotplug", 00:37:47.759 "params": { 00:37:47.759 "period_us": 100000, 00:37:47.759 "enable": false 
00:37:47.759 } 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "method": "bdev_wait_for_examine" 00:37:47.759 } 00:37:47.759 ] 00:37:47.759 }, 00:37:47.759 { 00:37:47.759 "subsystem": "nbd", 00:37:47.759 "config": [] 00:37:47.759 } 00:37:47.759 ] 00:37:47.759 }' 00:37:47.759 [2024-10-11 12:11:32.278708] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 00:37:47.760 [2024-10-11 12:11:32.278781] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347104 ] 00:37:47.760 [2024-10-11 12:11:32.354630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.760 [2024-10-11 12:11:32.384307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.020 [2024-10-11 12:11:32.526854] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:48.591 12:11:33 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:48.591 12:11:33 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:48.591 12:11:33 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:48.591 12:11:33 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:48.591 12:11:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:48.852 12:11:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:48.852 12:11:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:48.852 12:11:33 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:48.852 12:11:33 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:48.852 12:11:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:49.113 12:11:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:49.113 12:11:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:49.113 12:11:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:49.113 12:11:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:49.374 12:11:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:49.374 12:11:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:49.374 12:11:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5DeCJmEOpT 
/tmp/tmp.k6dQO4G9Vw 00:37:49.374 12:11:33 keyring_file -- keyring/file.sh@20 -- # killprocess 1347104 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1347104 ']' 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1347104 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347104 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1347104' 00:37:49.374 killing process with pid 1347104 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@969 -- # kill 1347104 00:37:49.374 Received shutdown signal, test time was about 1.000000 seconds 00:37:49.374 00:37:49.374 Latency(us) 00:37:49.374 [2024-10-11T10:11:34.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:49.374 [2024-10-11T10:11:34.006Z] =================================================================================================================== 00:37:49.374 [2024-10-11T10:11:34.006Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@974 -- # wait 1347104 00:37:49.374 12:11:33 keyring_file -- keyring/file.sh@21 -- # killprocess 1345059 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1345059 ']' 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1345059 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:49.374 12:11:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1345059 00:37:49.634 12:11:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:49.635 12:11:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:49.635 12:11:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1345059' 00:37:49.635 killing process with pid 1345059 00:37:49.635 12:11:34 keyring_file -- common/autotest_common.sh@969 -- # kill 1345059 00:37:49.635 12:11:34 keyring_file -- common/autotest_common.sh@974 -- # wait 1345059 00:37:49.635 00:37:49.635 real 0m11.956s 00:37:49.635 user 0m28.926s 00:37:49.635 sys 0m2.638s 00:37:49.635 12:11:34 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:49.635 12:11:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:49.635 ************************************ 00:37:49.635 END TEST keyring_file 00:37:49.635 ************************************ 00:37:49.895 12:11:34 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:49.895 12:11:34 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:49.895 12:11:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:49.895 12:11:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:49.895 12:11:34 -- common/autotest_common.sh@10 -- # set +x 
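The refcount assertions in the keyring_file run above reduce to one pattern: query the bdevperf RPC socket for the key list and filter it with jq. A minimal standalone sketch of that pattern, assuming SPDK's rpc.py and the bperf socket path used throughout this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # how many keys are registered? the test expects 2 (key0 and key1)
    "$rpc" -s "$sock" keyring_get_keys | jq length

    # refcount of one key; the (( 2 == 2 )) check above expects one reference
    # from the keyring itself plus one from the attached nvme0 controller
    "$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'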
00:37:49.895 ************************************ 00:37:49.895 START TEST keyring_linux 00:37:49.895 ************************************ 00:37:49.895 12:11:34 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:49.895 Joined session keyring: 745292954 00:37:49.895 * Looking for test storage... 00:37:49.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:49.895 12:11:34 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:49.895 12:11:34 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:37:49.895 12:11:34 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:49.895 12:11:34 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:49.895 12:11:34 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.896 12:11:34 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:49.896 12:11:34 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:49.896 12:11:34 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.896 12:11:34 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:49.896 12:11:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.896 12:11:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.896 12:11:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.896 12:11:34 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:49.896 12:11:34 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.896 12:11:34 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:49.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.896 --rc genhtml_branch_coverage=1 00:37:49.896 --rc genhtml_function_coverage=1 00:37:49.896 --rc genhtml_legend=1 00:37:49.896 --rc geninfo_all_blocks=1 00:37:49.896 --rc geninfo_unexecuted_blocks=1 00:37:49.896 00:37:49.896 ' 00:37:49.896 12:11:34 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:49.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.896 --rc genhtml_branch_coverage=1 00:37:49.896 --rc genhtml_function_coverage=1 00:37:49.896 --rc genhtml_legend=1 00:37:49.896 --rc geninfo_all_blocks=1 00:37:49.896 --rc geninfo_unexecuted_blocks=1 00:37:49.896 00:37:49.896 ' 00:37:49.896 12:11:34 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:49.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.896 --rc genhtml_branch_coverage=1 00:37:49.896 --rc genhtml_function_coverage=1 00:37:49.896 --rc genhtml_legend=1 00:37:49.896 --rc geninfo_all_blocks=1 00:37:49.896 --rc geninfo_unexecuted_blocks=1 00:37:49.896 00:37:49.896 ' 00:37:49.896 12:11:34 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:49.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.896 --rc genhtml_branch_coverage=1 00:37:49.896 --rc genhtml_function_coverage=1 00:37:49.896 --rc genhtml_legend=1 00:37:49.896 --rc geninfo_all_blocks=1 00:37:49.896 --rc geninfo_unexecuted_blocks=1 00:37:49.896 00:37:49.896 ' 00:37:49.896 12:11:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:49.896 12:11:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.896 12:11:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:50.156 12:11:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.156 12:11:34 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:50.156 12:11:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:50.156 12:11:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.156 12:11:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.156 12:11:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.156 12:11:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.156 12:11:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.156 12:11:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.156 12:11:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:50.156 12:11:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
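The host identity exported in the environment block above comes from nvme-cli. A sketch of that derivation, assuming nvme-cli is installed; the parameter-expansion split is one way to recover the value, not necessarily the exact line in common.sh:

    hostnqn=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    hostid=${hostnqn##*uuid:}     # bare UUID, the value later passed as --hostid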
00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:50.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:50.157 /tmp/:spdk-test:key0 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:50.157 
12:11:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:50.157 12:11:34 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:50.157 12:11:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:50.157 /tmp/:spdk-test:key1 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1347621 00:37:50.157 12:11:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1347621 00:37:50.157 12:11:34 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1347621 ']' 00:37:50.157 12:11:34 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.157 12:11:34 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:50.157 12:11:34 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.157 12:11:34 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:50.157 12:11:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:50.157 [2024-10-11 12:11:34.673972] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
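The prep_key calls above turn each raw hex string into the NVMe/TCP TLS interchange form through an inline python snippet: a 4-byte little-endian CRC32 of the key bytes is appended, the result is base64-encoded, and the whole thing is wrapped in the NVMeTLSkey-1 prefix with a two-digit digest id (00 here, meaning the configured PSK is used as-is). A minimal sketch of that transform; the helper body is a reconstruction, not the verbatim nvmf/common.sh code:

    format_key() {
        local prefix=$1 key=$2 digest=$3
        # append CRC32 (little-endian) to the key bytes, base64, then wrap
        python3 -c "import base64, zlib; crc = zlib.crc32(b'$key').to_bytes(4, 'little'); print('$prefix:%02x:%s:' % ($digest, base64.b64encode(b'$key' + crc).decode()), end='')"
    }

    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
    # -> NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    # which matches the key0 payload the keyctl step stores next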
00:37:50.157 [2024-10-11 12:11:34.674026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347621 ] 00:37:50.157 [2024-10-11 12:11:34.747367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.157 [2024-10-11 12:11:34.777409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.417 12:11:34 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:50.417 12:11:34 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:50.417 12:11:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:50.417 12:11:34 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:50.417 12:11:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:50.417 [2024-10-11 12:11:34.957349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.417 null0 00:37:50.417 [2024-10-11 12:11:34.989403] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:50.417 [2024-10-11 12:11:34.989763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:50.417 12:11:35 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:50.417 12:11:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:50.417 660393797 00:37:50.417 12:11:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:50.417 976530040 00:37:50.417 12:11:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1347626 00:37:50.417 12:11:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1347626 /var/tmp/bperf.sock 00:37:50.417 12:11:35 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:50.417 12:11:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1347626 ']' 00:37:50.417 12:11:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:50.417 12:11:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:50.417 12:11:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:50.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:50.417 12:11:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:50.417 12:11:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:50.677 [2024-10-11 12:11:35.067575] Starting SPDK v25.01-pre git sha1 5031f0f3b / DPDK 24.07.0 initialization... 
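The two keyctl calls above park the interchange PSKs on the session keyring (@s) and return the serial numbers (660393797 and 976530040) that the test later resolves by name. The full round trip the keyring_linux helpers perform, sketched with the key0 payload from this run:

    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # -> 660393797 in this run
    keyctl search @s user :spdk-test:key0             # name -> serial, what get_keysn does
    keyctl print "$sn"                                # payload dump for the [[ ... == ... ]] check
    keyctl unlink "$sn"                               # teardown: logs '1 links removed'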
00:37:50.677 [2024-10-11 12:11:35.067622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347626 ] 00:37:50.677 [2024-10-11 12:11:35.142738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.677 [2024-10-11 12:11:35.172497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.249 12:11:35 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:51.249 12:11:35 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:51.249 12:11:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:51.249 12:11:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:51.509 12:11:36 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:51.509 12:11:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:51.768 12:11:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:51.768 12:11:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:51.768 [2024-10-11 12:11:36.392397] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:52.029 nvme0n1 00:37:52.029 12:11:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:52.029 12:11:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:52.029 12:11:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:52.029 12:11:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:52.029 12:11:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.029 12:11:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:52.289 12:11:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:52.289 12:11:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:52.289 12:11:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@25 -- # sn=660393797 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:52.289 12:11:36 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 660393797 == \6\6\0\3\9\3\7\9\7 ]] 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 660393797 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:52.289 12:11:36 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:52.549 Running I/O for 1 seconds... 00:37:53.490 24320.00 IOPS, 95.00 MiB/s 00:37:53.490 Latency(us) 00:37:53.490 [2024-10-11T10:11:38.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:53.490 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:53.490 nvme0n1 : 1.01 24321.75 95.01 0.00 0.00 5247.24 4014.08 10267.31 00:37:53.490 [2024-10-11T10:11:38.122Z] =================================================================================================================== 00:37:53.490 [2024-10-11T10:11:38.122Z] Total : 24321.75 95.01 0.00 0.00 5247.24 4014.08 10267.31 00:37:53.490 { 00:37:53.490 "results": [ 00:37:53.490 { 00:37:53.490 "job": "nvme0n1", 00:37:53.490 "core_mask": "0x2", 00:37:53.490 "workload": "randread", 00:37:53.490 "status": "finished", 00:37:53.490 "queue_depth": 128, 00:37:53.490 "io_size": 4096, 00:37:53.490 "runtime": 1.005232, 00:37:53.490 "iops": 24321.748611265855, 00:37:53.490 "mibps": 95.00683051275725, 00:37:53.490 "io_failed": 0, 00:37:53.490 "io_timeout": 0, 00:37:53.491 "avg_latency_us": 5247.2386349816625, 00:37:53.491 "min_latency_us": 4014.08, 00:37:53.491 "max_latency_us": 10267.306666666667 00:37:53.491 } 00:37:53.491 ], 00:37:53.491 "core_count": 1 00:37:53.491 } 00:37:53.491 12:11:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:53.491 12:11:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:53.750 12:11:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:53.750 12:11:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:53.750 12:11:38 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:53.750 12:11:38 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:37:53.750 12:11:38 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:53.750 12:11:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:53.750 12:11:38 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:53.750 12:11:38 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:53.750 12:11:38 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:53.750 12:11:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:54.011 [2024-10-11 12:11:38.498665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:54.011 [2024-10-11 12:11:38.498836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49eb0 (107): Transport endpoint is not connected 00:37:54.011 [2024-10-11 12:11:38.499833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa49eb0 (9): Bad file descriptor 00:37:54.011 [2024-10-11 12:11:38.500834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:54.011 [2024-10-11 12:11:38.500841] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:54.011 [2024-10-11 12:11:38.500846] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:54.011 [2024-10-11 12:11:38.500852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
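The failed attach above is the point of the test: the initiator presents :spdk-test:key1 while the target listener was configured with key0's PSK, so the TLS handshake never completes and the controller lands in a failed state with errno 107 (Transport endpoint is not connected). The harness asserts this with the NOT wrapper; a minimal sketch of that negative-test idiom, with the helper body reconstructed rather than copied from autotest_common.sh:

    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded -> fail the test
        fi
        return 0       # command failed, which is what we wanted
    }

    NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1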
00:37:54.011 request: 00:37:54.011 { 00:37:54.011 "name": "nvme0", 00:37:54.011 "trtype": "tcp", 00:37:54.011 "traddr": "127.0.0.1", 00:37:54.011 "adrfam": "ipv4", 00:37:54.011 "trsvcid": "4420", 00:37:54.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:54.011 "prchk_reftag": false, 00:37:54.011 "prchk_guard": false, 00:37:54.011 "hdgst": false, 00:37:54.011 "ddgst": false, 00:37:54.011 "psk": ":spdk-test:key1", 00:37:54.011 "allow_unrecognized_csi": false, 00:37:54.011 "method": "bdev_nvme_attach_controller", 00:37:54.011 "req_id": 1 00:37:54.011 } 00:37:54.011 Got JSON-RPC error response 00:37:54.011 response: 00:37:54.011 { 00:37:54.011 "code": -5, 00:37:54.011 "message": "Input/output error" 00:37:54.011 } 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@33 -- # sn=660393797 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 660393797 00:37:54.011 1 links removed 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@33 -- # sn=976530040 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 976530040 00:37:54.011 1 links removed 00:37:54.011 12:11:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1347626 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1347626 ']' 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1347626 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347626 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1347626' 00:37:54.011 killing process with pid 1347626 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@969 -- # kill 1347626 00:37:54.011 Received shutdown signal, test time was about 1.000000 seconds 00:37:54.011 00:37:54.011 
Latency(us) 00:37:54.011 [2024-10-11T10:11:38.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:54.011 [2024-10-11T10:11:38.643Z] =================================================================================================================== 00:37:54.011 [2024-10-11T10:11:38.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:54.011 12:11:38 keyring_linux -- common/autotest_common.sh@974 -- # wait 1347626 00:37:54.272 12:11:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1347621 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1347621 ']' 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1347621 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1347621 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1347621' 00:37:54.272 killing process with pid 1347621 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@969 -- # kill 1347621 00:37:54.272 12:11:38 keyring_linux -- common/autotest_common.sh@974 -- # wait 1347621 00:37:54.533 00:37:54.533 real 0m4.662s 00:37:54.533 user 0m9.066s 00:37:54.533 sys 0m1.353s 00:37:54.533 12:11:38 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:54.533 12:11:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:54.533 ************************************ 00:37:54.533 END TEST keyring_linux 00:37:54.533 ************************************ 00:37:54.533 12:11:39 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:54.533 12:11:39 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:37:54.533 12:11:39 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:54.533 12:11:39 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:54.533 12:11:39 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:37:54.533 12:11:39 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:37:54.533 12:11:39 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:37:54.533 12:11:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:54.533 12:11:39 -- common/autotest_common.sh@10 -- # set +x 00:37:54.533 12:11:39 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:37:54.533 12:11:39 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:54.533 12:11:39 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:54.533 12:11:39 -- common/autotest_common.sh@10 -- # set +x 00:38:02.667 INFO: APP EXITING 
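Both teardowns above funnel through the same killprocess pattern visible in the xtrace: check the pid is set and alive, confirm on Linux that the target is a reactor rather than a sudo wrapper, announce, kill, and wait. A reconstruction from the trace, not the verbatim autotest_common.sh helper:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2> /dev/null || return 0     # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1  # never SIGTERM the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }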
00:38:02.667 INFO: killing all VMs
00:38:02.667 INFO: killing vhost app
00:38:02.667 INFO: EXIT DONE
00:38:05.970 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:65:00.0 (144d a80a): Already using the nvme driver
00:38:05.970 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:38:05.970 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:38:09.270 Cleaning
00:38:09.270 Removing: /var/run/dpdk/spdk0/config
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:38:09.270 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:09.531 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:09.531 Removing: /var/run/dpdk/spdk1/config
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:09.531 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:09.531 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:09.531 Removing: /var/run/dpdk/spdk2/config
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:09.531 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:09.531 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:09.531 Removing: /var/run/dpdk/spdk3/config
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:09.531 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:09.531 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:09.531 Removing: /var/run/dpdk/spdk4/config
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:09.531 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:09.531 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:09.531 Removing: /dev/shm/bdev_svc_trace.1
00:38:09.531 Removing: /dev/shm/nvmf_trace.0
00:38:09.531 Removing: /dev/shm/spdk_tgt_trace.pid780384
00:38:09.531 Removing: /var/run/dpdk/spdk0
00:38:09.531 Removing: /var/run/dpdk/spdk1
00:38:09.531 Removing: /var/run/dpdk/spdk2
00:38:09.531 Removing: /var/run/dpdk/spdk3
00:38:09.531 Removing: /var/run/dpdk/spdk4
00:38:09.531 Removing: /var/run/dpdk/spdk_pid1025960
00:38:09.531 Removing: /var/run/dpdk/spdk_pid1031328
00:38:09.531 Removing: /var/run/dpdk/spdk_pid1033243
00:38:09.531 Removing: /var/run/dpdk/spdk_pid1035534
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1035724
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1035890
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1036229
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1036957
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1039182
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1040393
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1040808
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1043587
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1044765
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1045463
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1050528
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1057230
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1057231
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1057232
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1061916
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1072163
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1076991
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1084209
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1085705
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1087298
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1089073
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1094894
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1100338
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1109541
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1109543
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1114625
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1114930
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1115261
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1115616
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1115660
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1121320
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1121942
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1127328
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1130675
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1137068
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1143605
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1154434
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1162767
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1162775
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1185058
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1185893
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1186605
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1187245
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1188233
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1188837
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1189387
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1190219
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1195432
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1195773
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1202913
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1203290
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1210215
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1215252
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1226874
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1227547
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1232603
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1232957
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1237994
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1244712
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1247780
00:38:09.791 Removing: /var/run/dpdk/spdk_pid1260415
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1270870
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1272869
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1273882
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1293489
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1298195
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1301385
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1309607
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1309633
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1315594
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1317818
00:38:10.051 Removing: /var/run/dpdk/spdk_pid1320306
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1321494
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1324013
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1325294
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1335169
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1335835
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1336497
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1339245
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1339806
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1340474
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1345059
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1345363
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1347104
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1347621
00:38:10.052 Removing: /var/run/dpdk/spdk_pid1347626
00:38:10.052 Removing: /var/run/dpdk/spdk_pid778690
00:38:10.052 Removing: /var/run/dpdk/spdk_pid780384
00:38:10.052 Removing: /var/run/dpdk/spdk_pid781013
00:38:10.052 Removing: /var/run/dpdk/spdk_pid782093
00:38:10.052 Removing: /var/run/dpdk/spdk_pid782387
00:38:10.052 Removing: /var/run/dpdk/spdk_pid783558
00:38:10.052 Removing: /var/run/dpdk/spdk_pid783793
00:38:10.052 Removing: /var/run/dpdk/spdk_pid784154
00:38:10.052 Removing: /var/run/dpdk/spdk_pid785155
00:38:10.052 Removing: /var/run/dpdk/spdk_pid785854
00:38:10.052 Removing: /var/run/dpdk/spdk_pid786239
00:38:10.052 Removing: /var/run/dpdk/spdk_pid786645
00:38:10.052 Removing: /var/run/dpdk/spdk_pid787054
00:38:10.052 Removing: /var/run/dpdk/spdk_pid787449
00:38:10.052 Removing: /var/run/dpdk/spdk_pid787571
00:38:10.052 Removing: /var/run/dpdk/spdk_pid787845
00:38:10.052 Removing: /var/run/dpdk/spdk_pid788233
00:38:10.052 Removing: /var/run/dpdk/spdk_pid789430
00:38:10.052 Removing: /var/run/dpdk/spdk_pid792889
00:38:10.052 Removing: /var/run/dpdk/spdk_pid793249
00:38:10.052 Removing: /var/run/dpdk/spdk_pid793620
00:38:10.052 Removing: /var/run/dpdk/spdk_pid793634
00:38:10.052 Removing: /var/run/dpdk/spdk_pid794167
00:38:10.052 Removing: /var/run/dpdk/spdk_pid794410
00:38:10.052 Removing: /var/run/dpdk/spdk_pid794833
00:38:10.052 Removing: /var/run/dpdk/spdk_pid795158
00:38:10.052 Removing: /var/run/dpdk/spdk_pid795357
00:38:10.052 Removing: /var/run/dpdk/spdk_pid795538
00:38:10.052 Removing: /var/run/dpdk/spdk_pid795798
00:38:10.052 Removing: /var/run/dpdk/spdk_pid795907
00:38:10.052 Removing: /var/run/dpdk/spdk_pid796744
00:38:10.052 Removing: /var/run/dpdk/spdk_pid797187
00:38:10.052 Removing: /var/run/dpdk/spdk_pid797587
00:38:10.052 Removing: /var/run/dpdk/spdk_pid802180
00:38:10.052 Removing: /var/run/dpdk/spdk_pid807503
00:38:10.312 Removing: /var/run/dpdk/spdk_pid819588
00:38:10.312 Removing: /var/run/dpdk/spdk_pid820270
00:38:10.312 Removing: /var/run/dpdk/spdk_pid825576
00:38:10.312 Removing: /var/run/dpdk/spdk_pid826011
00:38:10.312 Removing: /var/run/dpdk/spdk_pid831104
00:38:10.312 Removing: /var/run/dpdk/spdk_pid838198
00:38:10.312 Removing: /var/run/dpdk/spdk_pid841473
00:38:10.312 Removing: /var/run/dpdk/spdk_pid854708
00:38:10.312 Removing: /var/run/dpdk/spdk_pid865602
00:38:10.312 Removing: /var/run/dpdk/spdk_pid867772
00:38:10.312 Removing: /var/run/dpdk/spdk_pid868788
00:38:10.312 Removing: /var/run/dpdk/spdk_pid889787
00:38:10.312 Removing: /var/run/dpdk/spdk_pid894544
00:38:10.312 Removing: /var/run/dpdk/spdk_pid950192
00:38:10.312 Removing: /var/run/dpdk/spdk_pid956724
00:38:10.312 Removing: /var/run/dpdk/spdk_pid964201
00:38:10.312 Removing: /var/run/dpdk/spdk_pid971545
00:38:10.312 Removing: /var/run/dpdk/spdk_pid971610
00:38:10.312 Removing: /var/run/dpdk/spdk_pid972615
00:38:10.312 Removing: /var/run/dpdk/spdk_pid973617
00:38:10.312 Removing: /var/run/dpdk/spdk_pid974693
00:38:10.312 Removing: /var/run/dpdk/spdk_pid975276
00:38:10.312 Removing: /var/run/dpdk/spdk_pid975406
00:38:10.312 Removing: /var/run/dpdk/spdk_pid975615
00:38:10.312 Removing: /var/run/dpdk/spdk_pid975767
00:38:10.312 Removing: /var/run/dpdk/spdk_pid975773
00:38:10.312 Removing: /var/run/dpdk/spdk_pid976780
00:38:10.312 Removing: /var/run/dpdk/spdk_pid977785
00:38:10.312 Removing: /var/run/dpdk/spdk_pid978792
00:38:10.312 Removing: /var/run/dpdk/spdk_pid979462
00:38:10.312 Removing: /var/run/dpdk/spdk_pid979464
00:38:10.312 Removing: /var/run/dpdk/spdk_pid979801
00:38:10.312 Removing: /var/run/dpdk/spdk_pid981200
00:38:10.312 Removing: /var/run/dpdk/spdk_pid982315
00:38:10.312 Removing: /var/run/dpdk/spdk_pid992293
00:38:10.312 Clean
00:38:10.312 12:11:54 -- common/autotest_common.sh@1451 -- # return 0
00:38:10.312 12:11:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:38:10.312 12:11:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:10.312 12:11:54 -- common/autotest_common.sh@10 -- # set +x
00:38:10.573 12:11:54 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:38:10.573 12:11:54 -- common/autotest_common.sh@730 -- # xtrace_disable
00:38:10.573 12:11:54 -- common/autotest_common.sh@10 -- # set +x
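The "Cleaning" pass above sweeps the per-process DPDK runtime state that each SPDK application left under /var/run/dpdk (config, fbarray_memseg-*, fbarray_memzone, hugepage_info, plus one spdk_pid* directory per target) and the SPDK trace shared-memory files in /dev/shm. A minimal sketch of that kind of sweep, assuming the same paths seen in the log; the helper name is illustrative, not SPDK's actual cleanup code:

  #!/usr/bin/env bash
  # Hypothetical helper: remove stale SPDK/DPDK runtime state after a test run.
  cleanup_runtime_state() {
      # Per-process DPDK runtime directories (config, fbarray_*, hugepage_info)
      sudo rm -rf /var/run/dpdk/spdk*
      # Trace shared-memory files left behind by SPDK targets
      sudo rm -f /dev/shm/spdk_tgt_trace.pid* /dev/shm/nvmf_trace.* /dev/shm/bdev_svc_trace.*
  }
  cleanup_runtime_state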
00:38:10.573 12:11:54 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:10.573 12:11:54 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:10.573 12:11:55 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:10.573 12:11:55 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:38:10.573 12:11:55 -- spdk/autotest.sh@394 -- # hostname
00:38:10.573 12:11:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:10.573 geninfo: WARNING: invalid characters removed from testname!
00:38:37.144 12:12:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:39.065 12:12:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:40.451 12:12:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:42.361 12:12:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:43.745 12:12:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:46.288 12:12:30 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:47.670 12:12:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
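The lcov invocations traced above form a three-step coverage pipeline: capture the counters produced during the test run, merge them with the pre-test baseline, then strip paths that should not count toward SPDK's coverage. A condensed restatement under the assumption that $LCOV_OPTS carries the --rc flags exported below; the loop stands in for the individual -r invocations and is not the script's literal code:

  OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  # 1. Capture counters gathered while the tests ran (-c), tagged with the host name.
  lcov $LCOV_OPTS -q -c --no-external -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"
  # 2. Merge the pre-test baseline with the test capture (-a adds a tracefile).
  lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # 3. Remove (-r) sources that should not count: bundled DPDK, system headers, sample apps.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
  done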
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:47.670 12:12:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:47.670 12:12:32 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:38:47.670 12:12:32 -- common/autotest_common.sh@1691 -- $ lcov --version 00:38:47.670 12:12:32 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:38:47.670 12:12:32 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:38:47.670 12:12:32 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:38:47.670 12:12:32 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:38:47.670 12:12:32 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:38:47.670 12:12:32 -- scripts/common.sh@336 -- $ IFS=.-: 00:38:47.670 12:12:32 -- scripts/common.sh@336 -- $ read -ra ver1 00:38:47.670 12:12:32 -- scripts/common.sh@337 -- $ IFS=.-: 00:38:47.670 12:12:32 -- scripts/common.sh@337 -- $ read -ra ver2 00:38:47.670 12:12:32 -- scripts/common.sh@338 -- $ local 'op=<' 00:38:47.670 12:12:32 -- scripts/common.sh@340 -- $ ver1_l=2 00:38:47.670 12:12:32 -- scripts/common.sh@341 -- $ ver2_l=1 00:38:47.670 12:12:32 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:38:47.670 12:12:32 -- scripts/common.sh@344 -- $ case "$op" in 00:38:47.670 12:12:32 -- scripts/common.sh@345 -- $ : 1 00:38:47.670 12:12:32 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:38:47.670 12:12:32 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:47.670 12:12:32 -- scripts/common.sh@365 -- $ decimal 1 00:38:47.670 12:12:32 -- scripts/common.sh@353 -- $ local d=1 00:38:47.670 12:12:32 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:38:47.670 12:12:32 -- scripts/common.sh@355 -- $ echo 1 00:38:47.670 12:12:32 -- scripts/common.sh@365 -- $ ver1[v]=1 00:38:47.670 12:12:32 -- scripts/common.sh@366 -- $ decimal 2 00:38:47.670 12:12:32 -- scripts/common.sh@353 -- $ local d=2 00:38:47.670 12:12:32 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:38:47.670 12:12:32 -- scripts/common.sh@355 -- $ echo 2 00:38:47.670 12:12:32 -- scripts/common.sh@366 -- $ ver2[v]=2 00:38:47.670 12:12:32 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:38:47.670 12:12:32 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:38:47.670 12:12:32 -- scripts/common.sh@368 -- $ return 0 00:38:47.670 12:12:32 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:47.670 12:12:32 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:38:47.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.670 --rc genhtml_branch_coverage=1 00:38:47.670 --rc genhtml_function_coverage=1 00:38:47.670 --rc genhtml_legend=1 00:38:47.670 --rc geninfo_all_blocks=1 00:38:47.670 --rc geninfo_unexecuted_blocks=1 00:38:47.670 00:38:47.670 ' 00:38:47.670 12:12:32 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:38:47.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.670 --rc genhtml_branch_coverage=1 00:38:47.670 --rc genhtml_function_coverage=1 00:38:47.670 --rc genhtml_legend=1 00:38:47.670 --rc geninfo_all_blocks=1 00:38:47.670 --rc geninfo_unexecuted_blocks=1 00:38:47.670 00:38:47.670 ' 00:38:47.670 12:12:32 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:38:47.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.670 --rc genhtml_branch_coverage=1 00:38:47.670 
00:38:47.670 12:12:32 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS=
00:38:47.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:47.670 --rc genhtml_branch_coverage=1
00:38:47.670 --rc genhtml_function_coverage=1
00:38:47.670 --rc genhtml_legend=1
00:38:47.670 --rc geninfo_all_blocks=1
00:38:47.670 --rc geninfo_unexecuted_blocks=1
00:38:47.670
00:38:47.670 '
00:38:47.670 12:12:32 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS='
00:38:47.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:47.670 --rc genhtml_branch_coverage=1
00:38:47.670 --rc genhtml_function_coverage=1
00:38:47.670 --rc genhtml_legend=1
00:38:47.670 --rc geninfo_all_blocks=1
00:38:47.670 --rc geninfo_unexecuted_blocks=1
00:38:47.670
00:38:47.670 '
00:38:47.670 12:12:32 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov
00:38:47.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:47.670 --rc genhtml_branch_coverage=1
00:38:47.670 --rc genhtml_function_coverage=1
00:38:47.671 --rc genhtml_legend=1
00:38:47.671 --rc geninfo_all_blocks=1
00:38:47.671 --rc geninfo_unexecuted_blocks=1
00:38:47.671
00:38:47.671 '
00:38:47.671 12:12:32 -- common/autotest_common.sh@1705 -- $ LCOV='lcov
00:38:47.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:47.671 --rc genhtml_branch_coverage=1
00:38:47.671 --rc genhtml_function_coverage=1
00:38:47.671 --rc genhtml_legend=1
00:38:47.671 --rc geninfo_all_blocks=1
00:38:47.671 --rc geninfo_unexecuted_blocks=1
00:38:47.671
00:38:47.671 '
00:38:47.671 12:12:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:47.671 12:12:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:38:47.671 12:12:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:38:47.671 12:12:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:47.671 12:12:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:47.671 12:12:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:47.671 12:12:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:47.671 12:12:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:47.671 12:12:32 -- paths/export.sh@5 -- $ export PATH
00:38:47.671 12:12:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:47.671 12:12:32 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:38:47.671 12:12:32 -- common/autobuild_common.sh@486 -- $ date +%s
00:38:47.671 12:12:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728641552.XXXXXX
00:38:47.671 12:12:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728641552.6VnEaj
00:38:47.671 12:12:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:38:47.671 12:12:32 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:38:47.671 12:12:32 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:38:47.671 12:12:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
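A few entries above, autobuild stamps a throwaway per-run scratch directory by combining the epoch second from date +%s with a mktemp template, which is how /tmp/spdk_1728641552.6VnEaj came to be. An illustrative restatement, not the script's literal code:

  # -d: create a directory; -t: place it under $TMPDIR (default /tmp).
  SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
  export SPDK_WORKSPACE    # e.g. /tmp/spdk_1728641552.6VnEaj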
00:38:47.671 12:12:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:38:47.671 12:12:32 -- common/autobuild_common.sh@502 -- $ get_config_params
00:38:47.671 12:12:32 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:38:47.671 12:12:32 -- common/autotest_common.sh@10 -- $ set +x
00:38:47.931 12:12:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:38:47.931 12:12:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:38:47.931 12:12:32 -- pm/common@17 -- $ local monitor
00:38:47.931 12:12:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:47.931 12:12:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:47.931 12:12:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:47.931 12:12:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:47.931 12:12:32 -- pm/common@21 -- $ date +%s
00:38:47.931 12:12:32 -- pm/common@25 -- $ sleep 1
00:38:47.931 12:12:32 -- pm/common@21 -- $ date +%s
00:38:47.931 12:12:32 -- pm/common@21 -- $ date +%s
00:38:47.931 12:12:32 -- pm/common@21 -- $ date +%s
00:38:47.931 12:12:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728641552
00:38:47.931 12:12:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728641552
00:38:47.931 12:12:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728641552
00:38:47.931 12:12:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728641552
00:38:47.931 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728641552_collect-cpu-load.pm.log
00:38:47.931 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728641552_collect-vmstat.pm.log
00:38:47.931 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728641552_collect-cpu-temp.pm.log
00:38:47.931 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728641552_collect-bmc-pm.bmc.pm.log
00:38:48.874 12:12:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:38:48.874 12:12:33 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:38:48.874 12:12:33 -- spdk/autopackage.sh@14 -- $ timing_finish
00:38:48.874 12:12:33 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
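start_monitor_resources launches the four power/perf collectors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) against the job's power/ output directory; each run leaves a <name>.pid file there so that stop_monitor_resources can signal it at exit, as the kill -TERM trace further down shows. A sketch of that pidfile pattern, under the assumption that a wrapper backgrounds each collector and records the PID itself (pm/common's internals may differ):

  POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  start_monitor() {    # hypothetical wrapper around scripts/perf/pm/<collector>
      local name=$1
      "scripts/perf/pm/$name" -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$(date +%s)" &
      echo $! > "$POWER_DIR/$name.pid"         # remembered for shutdown
  }
  stop_monitor() {
      local pidfile=$POWER_DIR/$1.pid
      [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
  }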
00:38:48.874 12:12:33 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:48.874 12:12:33 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:48.874 12:12:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:38:48.874 12:12:33 -- pm/common@29 -- $ signal_monitor_resources TERM
00:38:48.874 12:12:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:38:48.874 12:12:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:48.874 12:12:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:38:48.874 12:12:33 -- pm/common@44 -- $ pid=1361125
00:38:48.874 12:12:33 -- pm/common@50 -- $ kill -TERM 1361125
00:38:48.874 12:12:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:48.874 12:12:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:38:48.874 12:12:33 -- pm/common@44 -- $ pid=1361126
00:38:48.874 12:12:33 -- pm/common@50 -- $ kill -TERM 1361126
00:38:48.874 12:12:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:48.874 12:12:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:38:48.874 12:12:33 -- pm/common@44 -- $ pid=1361128
00:38:48.874 12:12:33 -- pm/common@50 -- $ kill -TERM 1361128
00:38:48.874 12:12:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:48.874 12:12:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:38:48.874 12:12:33 -- pm/common@44 -- $ pid=1361155
00:38:48.874 12:12:33 -- pm/common@50 -- $ sudo -E kill -TERM 1361155
00:38:48.874 + [[ -n 693873 ]]
00:38:48.874 + sudo kill 693873
00:38:48.924 [Pipeline] }
00:38:48.940 [Pipeline] // stage
00:38:48.946 [Pipeline] }
00:38:48.960 [Pipeline] // timeout
00:38:48.967 [Pipeline] }
00:38:48.981 [Pipeline] // catchError
00:38:48.986 [Pipeline] }
00:38:49.001 [Pipeline] // wrap
00:38:49.007 [Pipeline] }
00:38:49.020 [Pipeline] // catchError
00:38:49.029 [Pipeline] stage
00:38:49.031 [Pipeline] { (Epilogue)
00:38:49.044 [Pipeline] catchError
00:38:49.046 [Pipeline] {
00:38:49.059 [Pipeline] echo
00:38:49.061 Cleanup processes
00:38:49.067 [Pipeline] sh
00:38:49.408 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:49.408 1361266 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:49.408 1361826 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:49.424 [Pipeline] sh
00:38:49.716 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:49.716 ++ grep -v 'sudo pgrep'
00:38:49.716 ++ awk '{print $1}'
00:38:49.716 + sudo kill -9 1361266
00:38:49.729 [Pipeline] sh
00:38:50.020 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:02.266 [Pipeline] sh
00:39:02.558 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:02.558 Artifacts sizes are good
00:39:02.574 [Pipeline] archiveArtifacts
00:39:02.582 Archiving artifacts
00:39:02.720 [Pipeline] sh
00:39:03.011 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:03.028 [Pipeline] cleanWs
00:39:03.039 [WS-CLEANUP] Deleting project workspace...
00:39:03.039 [WS-CLEANUP] Deferred wipeout is used...
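The epilogue's "Cleanup processes" step above uses the same idiom as the prologue: list every process whose command line mentions the workspace, drop the pgrep doing the listing, and force-kill the rest (here the leftover ipmitool started by collect-bmc-pm, PID 1361266). Spelled out, with the guard that keeps an empty match from failing the step:

  # pgrep -af prints "PID full-command"; filter out the pgrep itself, keep PIDs.
  pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
         | grep -v 'sudo pgrep' \
         | awk '{print $1}')
  sudo kill -9 $pids || true    # "|| true": nothing left to kill is not an error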
00:39:03.047 [WS-CLEANUP] done
00:39:03.049 [Pipeline] }
00:39:03.067 [Pipeline] // catchError
00:39:03.080 [Pipeline] sh
00:39:03.371 + logger -p user.info -t JENKINS-CI
00:39:03.382 [Pipeline] }
00:39:03.396 [Pipeline] // stage
00:39:03.401 [Pipeline] }
00:39:03.416 [Pipeline] // node
00:39:03.422 [Pipeline] End of Pipeline
00:39:03.489 Finished: SUCCESS